
Tuesday, October 3, 2017

IPMP basics

IPMP is an acronym for IP multi-path, which roughly means resilience of IP connectivity achieved through redundancy: multiple paths of communication. This resilience is also commonly referred to as fault tolerance. In fact, multi-path is a general strategy for resilience in critical subsystems; another example is MPIO, which stands for multi-path I/O, but that's another story.

A welcome consequence of multiple paths is that performance can be enhanced as well, since data can flow through the paths in parallel. But, due to the connection-oriented nature of TCP/IP, this performance enhancement frequently narrows down to outbound traffic, that is, traffic flowing out of the IPMP system to remote clients.

IPMP has been available since older Solaris releases and I would say it has become progressively better and simpler to configure since its inception. As explained in another post of mine, Legacy & Future, I'll be using Solaris 11 as the cut-off point of my discussions. Things started to get significantly simpler and better with Solaris 11 Express, becoming really top-notch with Solaris 11.x onward.

I could talk only about Solaris 11.x, but I'll also address Solaris 11 Express because it's still a nice back-end system capable of running on the x86 (32-bit) platform. As everybody knows, beyond mid-range and high-end big-iron SPARC systems, Solaris 11.x only runs on x86-64 (64-bit) platforms. Oracle has completely dropped support for Solaris 11 Express, as it was marketed as a short-term transition from Solaris 10 to Solaris 11; the last update was SRU-13 or SRU-14 (focused on some Engineered Systems). But the truth is that Solaris 11 Express is an awesome system for near-zero or very small IT budgets built on legacy x86 hardware, and it still rivals much more recent Linux and BSD alternatives because it embeds very advanced key technologies such as ZFS and BEs (boot environments), beyond, of course, other high-end technologies such as IPMP. So if you still have this piece of software, consider using it, especially because it's quite possible to independently update some of its crucial components and applications based on open software.

Back to IPMP, the central idea is to group a given number of network interfaces and associate the group with a pool of new (data) addresses by which it will be publicly accessible. The group is materialized as a new network interface in the system, whose operation and availability are provided by the collaboration of the underlying group members. In general, the number of member network interfaces should be greater than or equal to the number of data addresses, and some member interfaces can each be set as a hot stand-by for the group. When stand-by interfaces are present the IPMP group is said to be of the active-standby type; otherwise it's of the active-active type. Unless you really have lots of network interfaces to spare, an active-standby group wastes a precious network resource, so prefer an active-active IPMP group.

NOTE
Sometimes there's some confusion, argumentation and comparison with another technology known as Link Aggregation, but they are quite different beasts, although both contribute to resilience and performance. One advantage of IPMP is that it operates on layer 3, thus imposing none of the special layer-2 driver and hardware requirements that Link Aggregation does. They are not mutually exclusive and can even be combined, but perhaps each one is better suited to a specific scenario or requirement. For instance, a back-to-back connection between two servers is better implemented via Link Aggregation, while outbound traffic load spreading may be better deployed via IPMP.
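For contrast, the Link-Aggregation counterpart on Solaris 11.x is a single dladm step; a minimal sketch, assuming net2 and net3 are unused and that the switch side is configured accordingly:

# dladm create-aggr -l net2 -l net3 aggr0
# dladm show-aggr aggr0

The rest of this post sticks to IPMP, which needs no switch-side support at all.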
Let's go straight to a minimal practical example, first on Solaris 11 Express and then on Solaris 11.3. Don't be fooled by the simplicity, because the solution is still quite powerful and significant to many application infrastructure models, and it is not easily attained, if at all, by more modern competing systems. By the way, I will assume that some techniques and technologies (NCP, routes and name resolution) described for manual wired connections will be implicitly used as needed.

EXAMPLE:

Setting up an active-active IPMP group from interfaces net2 and net3, whose links have been renamed from the e1000g2 and e1000g3 originally available on the system, respectively.

# dladm show-phys
LINK      MEDIA         STATE      SPEED  DUPLEX    DEVICE
...
net2      Ethernet      unknown    0      half      e1000g2
net3      Ethernet      unknown    0      half      e1000g3

...
 
The newly created network interface representing the IPMP group will stop working only if both net2 and net3 fail simultaneously. As long as both underlying interfaces remain operational, up to 2 Gbps of overall outbound bandwidth will be available across multiple TCP connections, bearing in mind that a single TCP connection is still limited to no more than 1 Gbps of inbound bandwidth.

NOTE
It may still not be crystal clear, but having N underlying interfaces of 1 Gbps in a given IPMP group will generally provide an overall outbound bandwidth of N Gbps for that group. The inbound bandwidth is a different story: if M < N data addresses are configured for an IPMP group of 1 Gbps underlying interfaces, then inbound performance will still be limited to 1 Gbps per TCP session, even though it may be possible to have M such sessions simultaneously.

On Solaris 11 Express:

Under Solaris 11 Express the overhaul of the IPMP management interface is still transitioning and its crucial parts must still be managed via the old ifconfig command. Do not attempt to manage the underlying interfaces net2 and net3 via the new ipadm command for anything related to IPMP.

Configure the IPMP group and its data-address.
The group will subsequently receive the underlying member interfaces:
# ifconfig ipmp0 ipmp 192.168.1.230/24 up
Configure the underlying interfaces:
# ifconfig net2 plumb group ipmp0 up
# ifconfig net3 plumb group ipmp0 up
 
NOTE
In the case of an active-standby configuration it would be necessary to choose one of the underlying interfaces as a stand-by interface by simply inserting the standby keyword just before the up keyword.
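For instance, had net3 been chosen as the stand-by interface of the group, its command above would become (a sketch of the variant just described):

# ifconfig net3 plumb group ipmp0 standby up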
Verify the configuration:
# ifconfig -a |ggrep -A2 'ipmp0:'
ipmp0: flags=8001000842
<UP,BROADCAST,RUNNING,MULTICAST,IPv4,IPMP> ...
        inet 192.168.1.230 netmask ffffff00 broadcast 192.168.1.255
        groupname ipmp0


# ifconfig -a |ggrep -A3 -E 'net(2|3):'
net2: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> ...
      inet 0.0.0.0 netmask ff000000
      groupname ipmp0
      ether 8:0:27:fe:f6:44
net3: flags=1000843<UP,BROADCAST,RUNNING,MULTICAST,IPv4> ...
      inet 0.0.0.0 netmask ff000000
      groupname ipmp0
      ether 8:0:27:c3:94:2


# ipadm show-if |ggrep -E 'ipmp0|net2|net3'
ipmp0  ok   bm--I-----4- ---
net2   ok   bm--------4- ---
net3   ok   bm--------4- ---


# ipadm show-addr 'ipmp0/'
ADDROBJ    TYPE     STATE  ADDR
ipmp0/?    static   ok     192.168.1.230/24


# ipmpstat -g
GROUP   GROUPNAME  STATE  FDT  INTERFACES
ipmp0   ipmp0      ok     --   net3 net2


# ipmpstat -i
INTERFACE ACTIVE GROUP  FLAGS   LINK PROBE    STATE
net3      yes    ipmp0  ------- up   disabled ok
net2      yes    ipmp0  --mb--- up   disabled ok
Make the configuration persistent across reboots:
(the order of the parameters below is important to obtain the exact results)
# cat /etc/hostname.ipmp0
ipmp 192.168.1.230/24 up

# cat /etc/hostname.net2
group ipmp0 up

# cat /etc/hostname.net3
group ipmp0 up
To eventually disable and clean up the IPMP group:
# rm /etc/hostname.net3
# rm /etc/hostname.net2
# rm /etc/hostname.ipmp0
 
# ifconfig ipmp0 down
# ifconfig net2 down
# ifconfig net3 down
 
# ifconfig net2 group ""
# ifconfig net3 group ""

# ifconfig net2 unplumb
# ifconfig net3 unplumb
# ifconfig ipmp0 unplumb

On Solaris 11.3:

Under Solaris 11.3 things are somewhat easier. IPMP management has been fully integrated into the ipadm command and persistence across reboots is on by default, requiring no additional actions.

Configure the underlying interfaces:
# ipadm create-ip net2
# ipadm create-ip net3
Configure the IPMP group:
# ipadm create-ipmp -i net2,net3 ipmp0
Set the data address for the IPMP group:
# ipadm create-addr -T static -a 192.168.1.230/24 ipmp0
ipmp0/v4
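A quick sanity check right after creating the data address; the output below is merely illustrative:

# ipmpstat -an
ADDRESS         STATE  GROUP  INBOUND  OUTBOUND
192.168.1.230   up     ipmp0  net2     net2 net3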


NOTE

Unfortunately, perhaps due to some subtle bug in the GA release of Solaris 11.3, it seems safer to only set the IPMP group data-address after the underlying interfaces have been added to the IPMP group.
To eventually disable and clean up the IPMP group:
(the order is important, again, due to some subtle bug)
# ipadm delete-addr ipmp0/v4
# ipadm remove-ipmp -i net2,net3 ipmp0
# ipadm delete-ipmp ipmp0
# ipadm delete-ip net2
# ipadm delete-ip net3
In the rare case where a standby underlying interface is still desired, for instance net4, it suffices to perform the following commands:
# ipadm create-ip net4
# ipadm set-ifprop -p standby=on -m ip net4
# ipadm add-ipmp -i net4 ipmp0
 
# ipadm show-if
IFNAME   CLASS    STATE   ACTIVE OVER
lo0      loopback ok      yes    --
ipmp0    ipmp     ok      yes    net2 net3 net4
net2     ip       ok      yes    --
net3     ip       ok      yes    --
net4     ip       ok      no     --


# ipmpstat -g

GROUP    GROUPNAME  STATE  FDT  INTERFACES
ipmp0    ipmp0      ok     --   net3 net2 (net4)

That's all very powerful and not that difficult to set up.
For sure one more cool technology available in Solaris!
   

Sunday, March 13, 2016

DNS client configuration

This is a quick web log on how to manually configure the DNS client.
I already have a few other posts on DNS but this topic was missing.
The closest I've logged was part of an AI Configuration Profile.

I assume that an active (on-line) DefaultFixed NCP is established.
The DNS server IP address is 192.168.10.1 and domain is business.corp.

If an old or damaged configuration is present, the following command should remove any old or bad settings, allowing a start over from scratch, such as on a fresh system without any DNS configuration:

# nscfg unconfig -v svc:/network/dns/client:default
unconfiguring DNS...
Delete customizations.
Refresh ... : .../svccfg -s .../dns/client:default refresh
successful unconfigure.


To manually configure the DNS client, perform the following:
(config/domain is unnecessary if you don't run DNS on your LAN)
(for more DNS client SMF properties check resolv.conf(4))

# svccfg -s dns/client
...> listprop config/*
config/value_authorization astring     solaris.smf...

...> setprop config/nameserver = net_address: (192.168.10.1) 
...> setprop config/domain = astring: ("business.corp") 
...> listprop config/*
config/value_authorization astring     solaris.smf...
config/nameserver          net_address 192.168.10.1
config/domain              astring     business.corp

...> select default
.../dns/client:default> refresh
 
.../dns/client:default> quit

 
For legacy compatibility do:
 
# nscfg export -v svc:/network/dns/client:default
exporting DNS legacy...
Looking in /etc/mnttab, for lofs mount.
Save new legacy file...
    Legacy contents identical. skip save
successful export.
No change to FMRI: svc:/network/dns/client:default


And check...

# cat /etc/resolv.conf 
#
# _AUTOGENERATED_FROM_SMF_V1_
#
# WARNING: THIS FILE GENERATED FROM SMF DATA.
#   DO NOT EDIT THIS FILE.  EDITS WILL BE LOST.
# See resolv.conf(4) for details.

domain    business.corp
nameserver    192.168.10.1

    

And to finish, of course (if not already so), we have to tell the system to consider the above DNS client configuration during host name resolution (I have another post with additional details about name-services/switch):

# svccfg -s name-service/switch 
...> setprop config/host = astring: "files dns" 
...> select default 
.../name-service/switch:default> refresh 
.../name-service/switch:default> quit

Make sure the affected SMF services are running:

# svcs dns/client name-service/switch
STATE   STIME    FMRI
online  14:05:32 svc:/system/name-service/switch:default
online  14:41:07 svc:/network/dns/client:default
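Finally, an end-to-end resolution test can be performed; any name the server at 192.168.10.1 can resolve will do, the one below being merely an example:

# getent hosts oracle.com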

     
NOTE
Under Solaris 11 Express the dns/client SMF service just manages the associated daemon and doesn't hold any user configuration yet. The DNS client configuration is still done by editing the /etc/resolv.conf file directly, as usual.
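In that case, and matching the example above, the manually maintained /etc/resolv.conf would simply contain:

# cat /etc/resolv.conf
domain business.corp
nameserver 192.168.10.1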
  

Manual wired connection

This post is to log the most basic network setup for a Solaris 11 box.
Since Solaris 11/11 the network configuration procedure has evolved.
While a lot of great things kicked in, it became more complex.
By Solaris 11.3 the procedures seem somewhat stable.

To be fair, the Solaris on-line documentation has always been great and under a continuous effort of improvement and correction.

But when all that's required is the good old straightforward static IP configuration on a standard wired Ethernet network, the lots of new frameworks and subsystems may get in the way. The more you're in a rush, the more these small complexities can drive you mad. So I tried to write this post as a sort of more straight, "complete" yet "minimalist" example.

As a minimum I assume:
  • An on-line DefaultFixed NCP.
  • A wired link that was already renamed to e0.
  • Basic DNS services just for Internet access.
  • Basic /etc/hosts for local host name resolution.
  • The host being configured is box-01 at 192.168.10.10.
  • The gateway is at 192.168.10.1.

CLEAN-UP

If you have some leftover configuration from other scenarios or from failed attempts, you can try some of the following commands in order to start over from scratch, and then follow the rest of this post to the bottom:

# ipadm delete-addr e0/v4
...

# ipadm delete-ip e0
...

# route -p show
...

# route -p delete ...
...

# route flush
...
  
Visit the post DNS client configuration for its specific clean-up.

CONFIGURATION

Create the e0 ip interface over the link of the same name.
(I simply don't have any reason to use different names right now)

# ipadm create-ip e0
# ipadm show-if

IFNAME     CLASS    STATE    ACTIVE OVER
lo0        loopback ok       yes    --
e0         ip       down     no     --


Set up the static IP address to be used by the interface.
(An unambiguous /etc/hosts entry can be used instead of the IP)
(Depending on the IP, it may require a /prefixlen or /etc/netmasks)

# ipadm create-addr -T static -a 192.168.10.10 e0/v4
# ipadm show-addr

ADDROBJ           TYPE     STATE        ADDR
lo0/v4            static   ok           127.0.0.1/8
e0/v4             static   ok           192.168.10.10/24
lo0/v6            static   ok           ::1/128


Set up the persistent route to the default gateway to be used.
(Assume that the default gateway is at 192.168.10.1)

# route -p add default 192.168.10.1
add net default: gateway 192.168.10.1
add persistent net default: gateway 192.168.10.1


# route -p show
persistent: route add default 192.168.10.1


# grep default /etc/inet/static_routes-DefaultFixed
default 192.168.10.1


# netstat -rn -f inet 

Routing Table: IPv4
 Destination     Gateway     Flags Ref  Use   Interface

------------- -------------- ----- --- ------ ---------
default       192.168.10.1   UG      8   7774          
127.0.0.1     127.0.0.1      UH      2   2694 lo0      
192.168.10.0  192.168.10.10  U       3    196 e0
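Before moving on, a quick reachability test of the default gateway doesn't hurt (assuming it answers ICMP echo requests):

# ping 192.168.10.1
192.168.10.1 is alive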
       

Next, configure the DNS client accordingly.

Finally, check if you have a reasonably basic /etc/hosts in place:
   
# cat /etc/hosts
#
# ...
#

::1             localhost
127.0.0.1       localhost              loghost
#
192.168.10.10   box-01.business.corp   box-01

  

Friday, August 1, 2014

DNS configuration file

By default, the DNS configuration file is /etc/named.conf.
The location of this file is good and bad at the same time.
It's good because it's at a standard UNIX location.
It's bad because it isn't in a dedicated directory.
 
In order to improve administration it's necessary to dedicate a more stable directory and to decouple, as much as possible, configuration details that are subject to more frequent changes (DNS zone data) from those that aren't, such as global options.
 
Consider all the assumptions presented in my DNS configuration.
There are two scenarios, one of them specific to a DNS internal root.
 
I) The DNS internal root main configuration file could be:
    (This is for internal root servers A, B, C and D)
 
#
#       Business Corp.
#
#       DNS internal root main configuration file.
#       Global options should be gathered in this file.
#       last update:  August 1, 2014.
#
 
options {
  version none;
  directory "/var/named";
  recursion no;  # recursion is an options-level statement, not a zone-level one
  # ...
};

# Internal root.
zone "." in {
  type master;
  file "db.root";
};

  
# Loopback zone.
zone "0.0.127.in-addr.arpa." in {
  type master;
  file "db.127.0.0";

  notify no;
};

    
# End of File.
      
II) The internal DNS server main configuration file could be:
    (This is for internal top-level servers NS00, NS01 and NS02)

#
#       Business Corp.
#
#       DNS internal server main configuration file.
#       Global options should be gathered in this file.
#       last update:  August 1, 2014.
#
  
options {
  version none;
  directory "/var/named";
  # ...
};
 
# Internal root.
zone "." in {
  type hint;
  file "db.cache";
};


# Loopback zone.
zone "0.0.127.in-addr.arpa." in {
  type master;
  file "db.127.0.0";

  notify no;
};

  
# Zones data (more frequently changed)
include "named.zones";
  
# End of File.

As soon as I'm satisfied with the global options the file won't change.
This is precisely my intention: administration limited to /var/named.
  
The file /var/named/named.zones will have other nested includes.
Most probably or ideally one additional nesting (include file) per zone.
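For instance, a named.zones arranged with one nested include per zone could look like the sketch below (the zones/*.conf naming is just an illustrative convention):

# Zones data, one include file per zone.
include "zones/business.corp.conf";
include "zones/10.in-addr.arpa.conf";
include "zones/168.192.in-addr.arpa.conf";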
   

DNS zone data source files

There is a tedious aspect of setting up DNS zone data.
It is how it will or should be organized within the file system.
This particular post seeks to address this specific point.

I'll take the same approach used for NIS maps' source files.
Please, visit that other post for a longer description and consideration.
 
# zfs create rpool/VARSHARE/named

# zfs list -t all -r rpool/VARSHARE
NAME                  USED  AVAIL  REFER  MOUNTPOINT
rpool/VARSHARE         52K  11.8G    40K  /var/share
rpool/VARSHARE/named   31K  11.8G    31K  /var/share/named


# chmod -R 750 /var/share/named

# ln -s /var/share/named /var/named
# ls -lh /var | grep ^l
...

lrwxrwxrwx   1 root     root ... named -> /var/share/named
...

For further organization no additional ZFS file systems are needed.
A simple directory structure within /var/named will do.
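For instance, since the zone data files referenced later in this series live under master/, pre-creating that sub-directory (merely a naming convention) is enough:

# mkdir /var/named/master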
  
Configure the directory option accordingly in /etc/named.conf:

options {
  # ...
  directory "/var/named";
  # ...
};
  

DNS zone data

Apart from installing and configuring DNS itself, a crucial preliminary step is to structure and prepare the DNS zone data source files. In what follows, I assume that all the structure and contents have been addressed as defined in the preceding posts (previous links).
  
Take the internal DNS servers NS00 thru NS02 (below DNS internal roots).
Their named.zones included by /etc/named.conf could be as follows:
  
zone "business.corp" {
  type master;
  file "master/db.business.corp";
};
 
zone "10.in-addr.arpa" {
  type master;
  file "master/db.10";
};
 
zone "168.192.in-addr.arpa" {
  type master;
  file "master/db.192.168";
};
   
NOTE
Of course, it's not recommended to have a multi-master setup.
This means, just as an example, that only NS00 should be master.
Hence, it suffices to substitute slave for master on NS01 and NS02.
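On NS01 and NS02 each entry would then point back at the master; a minimal sketch for the forward zone, assuming NS00's address (10.0.1.10) as listed further below:

zone "business.corp" {
  type slave;
  file "slave/db.business.corp";
  masters { 10.0.1.10; };
};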
The contents of each of the above zone data files on the master are as follows:

I) business.corp

;
;       Business Corp.
;
;       Internal DNS (top-level) server forward zone.
;       last update:  August 5, 2014.
;

 
$TTL 3h 

@  IN  SOA  NS00.business.corp.  hostmaster.business.corp.  ( 
            1    ; Serial 
            3h   ; Refresh after 3 hours 
            1h   ; Retry after 1 hour 
            1w   ; Expire after 1 week 
            1h ) ; Negative caching TTL of 1 hour

; Authoritative name servers.


                    IN  NS  NS00.business.corp. 
                    IN  NS  NS01.business.corp.
                    IN  NS  NS02.business.corp.

; The internal root servers A records.

A                   IN  A  10.0.0.10
B                   IN  A  10.0.0.20
C                   IN  A  10.0.0.30
D                   IN  A  10.0.0.40


; The internal top-level servers A records.

NS00                IN  A  10.0.1.10
NS01                IN  A  10.0.1.20
NS02                IN  A  10.0.1.30

; Other internal hosts A records.

; ...
 
; End of File.

II) 10.in-addr.arpa

;
;       Business Corp.
;
;       Internal DNS (top-level) server reverse zone.
;       last update:  August 5, 2014.
;

 
$TTL 3h 

@  IN  SOA  NS00.business.corp.  hostmaster.business.corp.  ( 
            1    ; Serial 
            3h   ; Refresh after 3 hours 
            1h   ; Retry after 1 hour 
            1w   ; Expire after 1 week 
            1h ) ; Negative caching TTL of 1 hour

; Authoritative name servers.


                    IN  NS  NS00.business.corp. 
                    IN  NS  NS01.business.corp.
                    IN  NS  NS02.business.corp.

; The internal root servers PTR records.

10.0.0              IN  PTR A.business.corp.
20.0.0              IN  PTR B.business.corp.
30.0.0              IN  PTR C.business.corp.
40.0.0              IN  PTR D.business.corp.

; The internal top-level servers PTR records.

10.1.0              IN  PTR NS00.business.corp.
20.1.0              IN  PTR NS01.business.corp.
30.1.0              IN  PTR NS02.business.corp.

; Other internal hosts PTR records.

; ...
 
; End of File.

III) 168.192.in-addr.arpa

;
;       Business Corp.
;
;       Internal DNS (top-level) server reverse zone.
;       last update:  August 5, 2014.
;

 
$TTL 3h 

@  IN  SOA  NS00.business.corp.  hostmaster.business.corp.  ( 
            1    ; Serial 
            3h   ; Refresh after 3 hours 
            1h   ; Retry after 1 hour 
            1w   ; Expire after 1 week 
            1h ) ; Negative caching TTL of 1 hour

; Authoritative name servers.


                    IN  NS  NS00.business.corp. 
                    IN  NS  NS01.business.corp.
                    IN  NS  NS02.business.corp.

; Other internal hosts PTR records.

; ...

; End of File.
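With the three zone files in place, and assuming BIND's named-checkconf utility is available, the whole configuration, including every master zone it references, can be validated in one shot before reloading the server:

# named-checkconf -z /etc/named.conf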

  

Thursday, July 31, 2014

DNS loopback zone

The loopback zone is part of a DNS configuration.
Its purpose is to handle the 127.0.0.0/24 network.
By convention and good practice each DNS server must handle it.
Naturally, the above recommendation doesn't apply to DNS root servers. 
In general the localhost number is 127.0.0.1.
Hence, the zone file is called db.127.0.0.

Consider the example given on the post DNS internal root.
The top-level (below DNS internal roots) internal DNS servers are:
  • NS00.business.corp
  • NS01.business.corp 
  • NS02.business.corp 
  
Each of them would have the following loopback zone configuration:
(the following are the contents of db.127.0.0)

;  
;       Business Corp.  
;  
;       The loopback zone.
;       last update:    July 31, 2014.
;

$TTL 3h

@  IN  SOA  NS00.business.corp.  hostmaster.business.corp.  (
            1    ; Serial
            3h   ; Refresh after 3 hours
            1h   ; Retry after 1 hour
            1w   ; Expire after 1 week
            1h ) ; Negative caching TTL of 1 hour


; Authoritative name servers.
 
   IN  NS  NS00.business.corp.
   IN  NS  NS01.business.corp.
   IN  NS  NS02.business.corp.

; The localhost PTR record.
 
1  IN PTR localhost.

; End of File.
 
In this particular case /etc/named.conf must contain:

zone "0.0.127.in-addr.arpa." in {
  type master;
  file "db.127.0.0";

  notify no;
};
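Assuming BIND's named-checkzone utility is at hand, the zone file can be quickly validated before (re)loading the server; the output is roughly as below:

# named-checkzone 0.0.127.in-addr.arpa db.127.0.0
zone 0.0.127.in-addr.arpa/IN: loaded serial 1
OK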

   

DNS internal root

A DNS internal root is a DNS configuration for an internal root domain ".". The DNS internal root servers are positioned within the organization's network, behind a firewall. Their configuration somewhat mimics that of the standard DNS root hints, but deals only with internal servers and internal top-level domains.

Using an internal root is more flexible and secure.
It's also more scalable than extensive forwarding.

As an example, assume that:

  • The internal DNS domain is business.corp.
    The company's name is Business Corp.
     
  • The following networks are used:
    • 192.168.0.0/16    (branch offices)
    • 10.0.0.0/8        (headquarters)
       
  • The internal root servers are:
    • A.business.corp
    • B.business.corp
    • C.business.corp
    • D.business.corp 
       
  • The top-level (below root) internal servers are:
    • NS00.business.corp
    • NS01.business.corp
    • NS02.business.corp
    
The internal root file conventionally called db.root could be:
   
;
;       Business Corp.
;
;       Internal DNS root and domains.
;       last update:    July 31, 2014.
;
 
$TTL 1d


.  IN  SOA  A.business.corp.  hostmaster.business.corp.  (
            1    ; serial
            3h   ; refresh
            1h   ; retry
            1w   ; expire
            1h ) ; negative caching TTL

  
; The internal root servers.

   IN  NS  A.business.corp.
   IN  NS  B.business.corp.
   IN  NS  C.business.corp.
   IN  NS  D.business.corp.


; The internal root servers addresses.

A.business.corp.    IN  A  10.0.0.10
B.business.corp.    IN  A  10.0.0.20
C.business.corp.    IN  A  10.0.0.30
D.business.corp.    IN  A  10.0.0.40

; The internal domains and their authoritative servers.

business.corp.            IN  NS  NS00.business.corp. 
                          IN  NS  NS01.business.corp. 
                          IN  NS  NS02.business.corp.
  
10.in-addr.arpa.          IN  NS  NS00.business.corp.
                          IN  NS  NS01.business.corp.
                          IN  NS  NS02.business.corp. 
  
168.192.in-addr.arpa.     IN  NS  NS00.business.corp.
                          IN  NS  NS01.business.corp.
                          IN  NS  NS02.business.corp. 
  
; End of File.
  
Naturally, NS00 thru NS02 further delegate as necessary.
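As an illustration of such further delegation, a hypothetical sub-domain eng.business.corp with its own name server could be delegated from within db.business.corp as sketched below (the eng names and address are made up for the example):

; Delegation of the (hypothetical) eng sub-domain.
eng                 IN  NS  NS10.eng.business.corp.
NS10.eng            IN  A   10.0.2.10  ; glue record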
In this particular case, the /etc/named.conf of the root servers has:

zone "." in {
  type master;
  file "db.root";

  recursion no;
};


NOTE
Not all of the root servers must be masters for the "." zone.
Of course, at a minimum just one of them needs to be; the others can be slaves, as usual.
Other internal DNS servers must use these internal DNS root servers.
These specifics are covered on another post: internal DNS servers.