
NetApp AFF Cluster Interconnect Switches

NetApp Clustered ONTAP CN1610 Interconnect Switch Setup

In this tutorial we will run through the steps needed to configure your NetApp CN1610 interconnect switches ready for Data ONTAP Cluster-Mode. When you receive your cluster interconnect switches they will be pre-configured from the factory with a very basic config. The steps below are in addition to the factory config, and ensure that proper management, alerting and logging are set up.

NetApp CN1610 for Clustered ONTAP - Setup Steps via Console

(CN1610) #serviceport protocol none    (turns on static IP addressing for the service port)
(CN1610) #network protocol none    (turns on static IP addressing for the network config)
(CN1610) #serviceport ip    (assigns an IP address to the service port for management)

(CN1610) #show serviceport

Interface Status............................... Up
IP Address..................................... 192.168.1.100
Subnet Mask.................................... 255.255.255.0
Default Gateway................................ 192.168.1.1
IPv6 Administrative Mode....................... Enabled
IPv6 Prefix is................................. fe80::2a0:98ff:fee7:4bef/64
Configured IPv4 Protocol....................... None
Configured IPv6 Protocol....................... None
IPv6 AutoConfig Mode........................... Disabled
Burned In MAC Address.......................... 00:A0:98:EE:AA:BB

(CN1610) #show network

Interface Status............................... Up
IP Address..................................... 0.0.0.0
Subnet Mask.................................... 0.0.0.0
Default Gateway................................ 0.0.0.0
IPv6 Administrative Mode....................... Enabled
IPv6 Prefix is................................. fe80::2a0:98ff:fee7:4bee/64
Burned In MAC Address.......................... 00:A0:98:EE:AA:BB
Locally Administered MAC address............... 00:00:00:00:00:00
MAC Address Type............................... Burned In
Configured IPv4 Protocol....................... None
Configured IPv6 Protocol....................... None
IPv6 AutoConfig Mode........................... Disabled
Management VLAN ID.............................
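
The factory config does not cover the alerting and logging mentioned above. As a sketch of what that might look like on the CN1610 (the FASTPATH syntax can vary by firmware release, and the server addresses below are hypothetical), remote syslog and SNTP time sync would be configured along these lines:

(CN1610) #config
(CN1610) (Config)# logging syslog    (enable sending log messages to syslog; assumed syntax)
(CN1610) (Config)# logging host 192.168.1.50    (hypothetical syslog server address)
(CN1610) (Config)# sntp client mode unicast    (enable SNTP time synchronisation)
(CN1610) (Config)# sntp server 192.168.1.51    (hypothetical NTP server address)
(CN1610) (Config)# exit
(CN1610) #write memory    (save the running config to the startup config)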



NetApp Cluster, Management, Data and HA Networks

The High Availability Network

The first network we'll look at is the connection between our controllers. The FAS series platforms support either one or two controllers in the chassis. If we have just one controller in the chassis it's a single point of failure, so we'll usually deploy two for redundancy, configured as a High Availability (HA) pair.

When we have High Availability for our controllers on the FAS2500 series, the two controllers are in the same physical chassis. In that case, the HA connection between the controllers is internal to the chassis, and we don't need to do any cabling for it.

On the dual chassis FAS8060 and the FAS8080 EX, the two controllers in an HA pair are in physically separate chassis, so HA cables must be physically connected between them.

Let's have a look at those two different types. The picture below shows a FAS8040 or single chassis FAS8060, which have two controllers in the same chassis. In this case the HA connection is internal, and we don't need to physically connect anything.
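
However the HA pair is cabled, you can verify it from the clustered ONTAP CLI with storage failover show. A quick sketch, with hypothetical node names and trimmed output:

cluster1::> storage failover show
                              Takeover
Node           Partner        Possible State Description
-------------- -------------- -------- ---------------------------
cluster1-01    cluster1-02    true     Connected to cluster1-02
cluster1-02    cluster1-01    true     Connected to cluster1-01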


Dual Chassis High Availability

See my next post for a full description of how High Availability works.

The Cluster Network

The next network we'll look at is the Cluster Network. This is used for traffic that goes between the nodes themselves, such as system information that is replicated between the nodes.

For example, when you make a configuration change it is written to one node and then replicated to the rest. Also, if incoming client data traffic hits a network port on a different controller than the one which owns the disks, that traffic will also go over the cluster network. Your disks are always owned by one and only one controller.
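
You can check the health of the cluster network from the ONTAP CLI. cluster show reports each node's health and eligibility, and at advanced privilege cluster ping-cluster exercises the interconnect paths; the node names here are assumptions:

cluster1::> cluster show
Node                  Health  Eligibility
--------------------- ------- ------------
cluster1-01           true    true
cluster1-02           true    true

cluster1::> set -privilege advanced
cluster1::*> cluster ping-cluster -node cluster1-01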

You can see in the example below we've got two nodes, Controller 1 and Controller 2.

Clients can access Aggregate 1 through either Controller

The aggregate itself is always owned by only one controller. Both controllers are connected to their own and to their High Availability peer's disks through SAS cables. The SAS cables are active-standby.
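
Disk ownership is easy to confirm from the CLI; a minimal sketch, assuming hypothetical disk and node names in the output:

cluster1::> storage disk show -fields owner
disk    owner
------- -----------
1.0.0   cluster1-01
1.0.1   cluster1-01
1.0.2   cluster1-02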


The Cluster Network

Controllers 1 and 2 are connected in an HA pair, and controllers 3 and 4 are also connected in an HA pair. All four controllers are in the same cluster. We have a pair of Cluster Interconnect Switches, and we connect each controller to both switches with 10Gb Ethernet connections. We also have inter-switch links between the two switches. The cluster interconnect is critical to the running of the system because it carries system information between the nodes.
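
On the controllers, the ports cabled to the interconnect switches carry the cluster role, and ONTAP can also monitor the switches themselves. As a sketch (port names vary by platform, and switch monitoring requires a reasonably recent Data ONTAP release):

cluster1::> network port show -role cluster    (lists the 10Gb ports used for the cluster network)
cluster1::> system cluster-switch show    (shows the monitored CN1610s, their firmware and serial numbers)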

Because it's so important to the system, you have to run it on private dedicated switches, you have to use a supported configuration, and you have to use a supported model of switch.

Cluster Network Switches

The first supported switch is the NetApp CN1610. It can be used for clusters with up to eight nodes. It has sixteen 10Gb Ethernet ports, and four of those ports are used for the inter-switch links.

Two Node Cluster

Customers asked NetApp the question, 'I've only got two nodes. Why can't I connect them directly to each other? Why do I have to buy separate switches, which cost money, take up rack space, power, and cooling, and are another component that can go wrong? I'd prefer just to connect the controllers directly to each other.'

NetApp listened to their customers, and there is now support for 'switchless two node clusters'. The diagram below shows a FAS8040 or a single chassis FAS8060 configured as a switchless two node cluster. Rather than using separate external switches, we connect two 10Gb Ethernet ports on the two controllers directly to each other.

Switchless 2 Node Cluster

Notice that this is different from High Availability (HA). For HA, when we're using a single chassis, we don't need to cable anything; the connection is internal. For the Cluster Interconnect, even when the two controllers are in the same chassis, we have to physically cable them together. This uses two of our 10Gb Ethernet ports.

If we have a dual chassis FAS8060 or the FAS8080 EX, we have a similar configuration. Again, we connect two of the 10Gb Ethernet ports on both controllers to each other.
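
Clustered ONTAP also needs to be told that the interconnect is switchless. This is exposed as an advanced privilege option; a sketch, with exact behaviour depending on your ONTAP release:

cluster1::> set -privilege advanced
cluster1::*> network options switchless-cluster show    (displays whether switchless mode is enabled)
cluster1::*> network options switchless-cluster modify -enabled true    (declares the cluster interconnect as directly cabled)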

The Management Network - Dual Switches (Recommended)

Dual management switches are not mandatory. If you only had one Management Network path and you lost it, your clients would still be able to access their data; you just wouldn't be able to make any changes until you'd fixed the problem. It's not really critical to have dual redundant management connections, but it is recommended by NetApp. If you do configure redundant management connections, it will take up another 1Gb Ethernet port in addition to e0M, and you won't be able to use that port for client access if you use it for management.
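
To review where the node management LIFs sit, and whether they have failover targets on the second switch, you can use network interface show; the role filter below is standard clustered ONTAP syntax, while any names in the output would be specific to your cluster:

cluster1::> network interface show -role node-mgmt    (lists each node's management LIF, home port and status)
cluster1::> network interface show -failover    (lists the failover targets for each LIF)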

Management Network Switches

If you have up to eight nodes in the cluster then you can use the NetApp CN1601 as a management switch. It has sixteen 1Gb Ethernet ports. The other supported option is the Cisco Catalyst 2960 switch.

The Data Network

The next network is the Data Network, for our client data connections. This supports our NAS protocols of NFS and CIFS, and our SAN protocols of iSCSI, Fibre Channel, and Fibre Channel over Ethernet. The data network uses Ethernet or Fibre Channel ports on our controllers, depending on the client access protocol.

Unlike the Management Network, if you have a single Data Network path and it goes down then your clients will not be able to access their data. The Data Network is almost certainly going to be mission critical for your enterprise, so for sure you're going to have redundant paths.
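
For NAS data LIFs, those redundant paths come from failover groups that span ports on both data switches. A minimal sketch using ONTAP 8.3-style syntax, with hypothetical SVM, LIF and port names:

cluster1::> network interface failover-groups create -vserver svm1 -failover-group fg_data -targets cluster1-01:e0c,cluster1-02:e0c
cluster1::> network interface modify -vserver svm1 -lif lif_data1 -failover-group fg_data    (ties the data LIF to the failover group)
cluster1::> network interface show -failover    (verify the LIF now has targets on both nodes)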