[ntp:hackers] What does "interface listen wildcard" do?

Brian Utterback brian.utterback at oracle.com
Fri Jul 12 13:23:41 UTC 2013


On 7/12/2013 12:25 AM, Philip Prindeville wrote:
> 1. This used to work.
> Lots of buggy or dangerous behavior "used to work". If it no longer does, that's usually a "win".

Funny thing about that argument. When something that used to work stops 
working, it's a bug if the old behavior was a feature, and a feature if 
the old behavior was a bug. Accepting broadcast packets only stopped 
working as a side effect of an implementation detail. Nobody said 
beforehand, "hmm, we should stop accepting broadcast packets".

>
>> 2. Users expect it to work.
> Users expect to use weak passwords and not be hacked. Thank God for adult supervision.

Okay, give me an argument that I can pass along to a customer for why 
using limited broadcast between their broadcast clients and servers was 
a bad idea, one that doesn't depend on implementation details and 
doesn't apply equally to directed broadcasts.
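
To be concrete about what such a customer actually configured: on the 
server side the only difference is the address handed to the broadcast 
directive. A rough ntp.conf sketch (addresses invented for illustration):

    # directed broadcast: requires knowing the subnet's broadcast address
    broadcast 192.0.2.255

    # limited broadcast: the form many v3-era configurations used
    broadcast 255.255.255.255

    # the clients are configured the same way in either case
    broadcastclient

From the customer's point of view both lines ask for the same behavior; 
only the second one depends on nothing but the interface itself.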


>
>
>> 3. I know of no network "best practice" or other document that even hints that one should use directed broadcasts in preference to undirected broadcasts. They each have specific uses and cannot replace one another.
> Then you should refer to the IPv4 "Router Requirements" RFC-1812, specifically:
> [quote elided]

Totally irrelevant to the issue at hand. There are very good reasons for 
a *router* to use directed broadcasts in lieu of limited broadcasts. 
Show me something in the Host Requirements RFC, or an RFC dealing with 
application-level issues. Or even a book on application programming 
that suggests it. Anything; I'm willing to listen.

> Limited broadcast packets are dangerous. You can get broadcast storms and denial-of-service attacks that are leveraged via broadcasts. They are often NOT sent with a TTL of 1 as they are required to be.

Unicast packets are dangerous too, as are directed broadcast packets. 
If your router doesn't forward limited broadcast packets, then directed 
broadcasts are *more* dangerous than limited broadcast packets.

>
> There are also broken routers which violate this RFC 1812 requirement:
>
>     (c) { -1, -1 }
>
>           Limited broadcast.  It MUST NOT be used as a source address.
>
>           A datagram with this destination address will be received by
>           every host and router on the connected physical network, but
>           will not be forwarded outside that network.
>
> Then again, accepting directed broadcasts from outside your administrative domain is also brain-dead, but lots of people allow that too.

Broken routers can create havoc no matter what you do. But as an aside, 
IPv6 taught us that, in the general case, even determining what the 
"administrative domain" is can be non-trivial.

>
>
>> 4. I know of at least one major router vendor whose NTP implementation does not allow the admin to set the broadcast address used by router for broadcast packets.
>
> That's arguably not a bad thing. The broadcast address to use for a subnet broadcast is bound to the interface state, and should not be overridden by an application.  All NTP should care about is what interfaces it's sending [broadcast] packets out, and it can inherit that information from the interface.

Agreed, but what if the broadcast address used is the limited broadcast? 
I know that the implementation on this router is version 3 and at that 
time everyone used limited broadcast for that purpose.

>
>
>> 5. I know of at least one major router vendor whose routers automatically convert directed broadcasts passing through the router into undirected broadcasts when the specified sub-net is reached.
> That's extremely brain-dead. I would not buy a router from this manufacturer. Rewriting IP header fields (other than the TTL) is not to be done lightly.

I will say that this router manufacturer has a very significant market 
share. The behavior may also be configurable, I don't know.

>
>
>> 6. The creation of subnetting was specifically designed so that the applications do not need to know the subnet masks of the adjacent sub-nets. Using directed broadcasts violates this principle and will probably break many configurations of virtual networking, certainly those using routers I mentioned in point 5.
> No, subnetting was created to allow for more efficient use of network address space. Quoting RFC 950:

> [quote elided]
>
> But to address your actual point, if you don't know the topology of adjacent subnets and their masks, then you probably have no business communicating with them anyway as you might be an exploitable attack surface for a DDoS attack.

You misunderstand. I am not saying that allowing applications to work 
without knowing the subnet mask was the motivation; I am saying it was 
a design goal, part of the design's robustness. A system does not need 
to know the subnet mask to find or be configured with a default router. 
It doesn't need to know the subnet mask to broadcast to all of the 
other systems on the same subnet, and it doesn't need to know the 
subnet mask to send unicast packets.
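
To put that in code terms: of those destinations, the directed 
broadcast address is the only one that can't even be computed without 
the mask. A throwaway illustration (not ntpd code, addresses invented):

/* Only the directed broadcast address needs the netmask. */
#include <stdio.h>
#include <arpa/inet.h>
#include <netinet/in.h>

int main(void)
{
    struct in_addr addr, mask, directed, limited;
    char d[INET_ADDRSTRLEN], l[INET_ADDRSTRLEN];

    inet_pton(AF_INET, "192.0.2.17", &addr);    /* our own address  */
    inet_pton(AF_INET, "255.255.255.0", &mask); /* needed only here */

    /* directed broadcast: host bits all ones, so the mask is required */
    directed.s_addr = addr.s_addr | ~mask.s_addr;

    /* limited broadcast: a constant, no mask or topology knowledge */
    limited.s_addr = htonl(INADDR_BROADCAST);

    printf("directed %s, limited %s\n",
           inet_ntop(AF_INET, &directed, d, sizeof(d)),
           inet_ntop(AF_INET, &limited, l, sizeof(l)));
    return 0;
}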


>
>> On a related but mostly independent note:
>>
>> I think we made a big mistake in 2004, extending the usage of the system of binding all of the interfaces as documented in bug 314, instead of trying to eliminate it. At the time I was concerned about how prevalent IP_PKTINFO was, particularly since it wasn't available in Solaris. But I think it is now available on most platforms. Other than the single issue of whether or not it is a supported part of all of our supported platforms, there is no other argument against using IP_PKTINFO documented in bug 314 that I think has held up nine years on.
> You could use SOCK_RAW as a work-around for platforms not providing IP_PKTINFO.  On Linux you could additionally use IP_RECVORIGDSTADDR.

Yep. So I don't see any reason to bind all of the interfaces anymore.  I 
think it would reduce the complexity of ntp_io.c considerably to 
eliminate that necessity.
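
To make the suggestion concrete, the receive path with IP_PKTINFO looks 
roughly like the sketch below (stand-alone toy code, not ntp_io.c, with 
error handling omitted; IPv6 would use IPV6_RECVPKTINFO the same way):

/* Sketch only: one wildcard-bound socket plus IP_PKTINFO ancillary data. */
#define _GNU_SOURCE             /* for struct in_pktinfo on glibc */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/uio.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int on = 1;
    struct sockaddr_in any, from;
    char pkt[1024], cbuf[512], dst[INET_ADDRSTRLEN];
    struct iovec iov = { pkt, sizeof(pkt) };
    struct msghdr msg;
    struct cmsghdr *cm;

    memset(&any, 0, sizeof(any));
    any.sin_family = AF_INET;
    any.sin_port = htons(123);                  /* NTP port; needs privilege */
    any.sin_addr.s_addr = htonl(INADDR_ANY);    /* the wildcard address */

    setsockopt(fd, IPPROTO_IP, IP_PKTINFO, &on, sizeof(on));
    bind(fd, (struct sockaddr *)&any, sizeof(any));

    memset(&msg, 0, sizeof(msg));
    msg.msg_name = &from;
    msg.msg_namelen = sizeof(from);
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    if (recvmsg(fd, &msg, 0) < 0)
        return 1;

    /* The ancillary data tells us which interface the packet arrived on
     * and which local address it was sent to. */
    for (cm = CMSG_FIRSTHDR(&msg); cm != NULL; cm = CMSG_NXTHDR(&msg, cm)) {
        if (cm->cmsg_level == IPPROTO_IP && cm->cmsg_type == IP_PKTINFO) {
            struct in_pktinfo *pi = (struct in_pktinfo *)CMSG_DATA(cm);

            inet_ntop(AF_INET, &pi->ipi_addr, dst, sizeof(dst));
            printf("arrived on ifindex %d, addressed to %s\n",
                   pi->ipi_ifindex, dst);
        }
    }
    return 0;
}

The reply can then be sent with the same local address the request 
arrived on (IP_PKTINFO works on the send side too), which as far as I 
can see is the main thing binding every interface buys us today.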

In fact, if we can count on having receive TIMESTAMP ancillary data 
available, we could eliminate the need for the whole interrupt-driven 
receive-buffer framework, except for refclocks. If we resurrected the 
old serial stream module that embedded the timestamp, we could 
eliminate it even for most refclocks. That may or may not be feasible 
on all platforms, however.
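
For the network sockets at least, the shape would be something like 
this: a sketch assuming the SO_TIMESTAMP/SCM_TIMESTAMP ancillary data 
that Linux and the BSDs provide (other platforms would need checking):

/* Sketch: pull the kernel receive timestamp out of the ancillary data. */
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/time.h>
#include <sys/uio.h>

/* Enable once per socket:
 *     int on = 1;
 *     setsockopt(fd, SOL_SOCKET, SO_TIMESTAMP, &on, sizeof(on));
 */
static ssize_t
recv_with_stamp(int fd, void *buf, size_t len, struct timeval *rx)
{
    char cbuf[CMSG_SPACE(sizeof(struct timeval))];
    struct iovec iov = { buf, len };
    struct msghdr msg;
    struct cmsghdr *cm;
    ssize_t n;

    memset(&msg, 0, sizeof(msg));
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = cbuf;
    msg.msg_controllen = sizeof(cbuf);

    n = recvmsg(fd, &msg, 0);
    if (n < 0)
        return n;

    /* The kernel's receive timestamp rides along as SCM_TIMESTAMP. */
    for (cm = CMSG_FIRSTHDR(&msg); cm != NULL; cm = CMSG_NXTHDR(&msg, cm))
        if (cm->cmsg_level == SOL_SOCKET && cm->cmsg_type == SCM_TIMESTAMP)
            memcpy(rx, CMSG_DATA(cm), sizeof(*rx));

    return n;
}

A plain blocking or select()-driven caller gets the packet and its 
arrival time in one call, which is the whole point.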

The ntp_io.c file has more than doubled since the days of xntpd. I bet 
we could get it down to less than 3/4 of its old size if we refactored 
it like this.

Brian Utterback

