I have a Nexus 9K VPC construction that I want to connect to a Cisco Meraki stack. I need the stack for redundancy purposes, plus the Meraki side is doing some L3 routing as one unit.
What would be the best way to connect in terms of physical/logical interfaces?
See the attached screenshot for my current plan.
This topology looks like it’s part of the implementation you described in this thread. Based on the diagram, the connectivity between the Meraki switches and the Nexus switches is well designed: you have two port channels coming into the Meraki stack from two different Nexus switches, which delivers redundancy.
Now, I would suggest that these port channels be Layer 3, and that the Nexus pair also perform L3 routing for the internal network. Although the Meraki devices are performing L3, internal subnet-to-subnet routing should not burden these edge devices; it should be handled by the Nexus pair instead.
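As a rough sketch, one of those Layer 3 port channels might look like this on the Nexus side (the interface numbers and the point-to-point addressing are assumptions for illustration, not taken from your diagram):

```
! Nexus side - routed (no switchport) port channel toward one Meraki uplink
interface port-channel10
  no switchport
  ip address 10.10.10.1/30

interface Ethernet1/1
  no switchport
  channel-group 10 mode active   ! LACP
```

The Meraki side would then be configured with a matching LACP aggregate and the other /30 address.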
You can apply HSRP to the Nexus pair for first-hop redundancy and load balancing of traffic from your internal network. In a vPC environment, HSRP on Nexus devices (unlike standalone IOS devices) forwards traffic on both peers by default, so you get load distribution automatically. I’m not as familiar with Meraki devices, but I’m sure there are mechanisms to help load balance across the two port channels you created to distribute traffic more appropriately.
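A minimal sketch of HSRP on an SVI on one of the Nexus peers (the VLAN number, group number, and addresses are illustrative assumptions; the second peer would carry its own physical IP and typically a different priority):

```
feature hsrp
feature interface-vlan

! Nexus-1 SVI - in a vPC domain, both peers forward traffic
! destined to the HSRP virtual IP
interface Vlan100
  no shutdown
  ip address 192.0.2.2/24
  hsrp 100
    priority 110
    ip 192.0.2.1   ! virtual IP used as the clients' default gateway
```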
When policing is configured on a VLAN, it applies to the traffic on that VLAN. Physical L3 interfaces don’t belong to a VLAN, so such a configuration would not apply to traffic on those interfaces. You can, however, apply policing on the physical interface itself, but only as an independent policer.
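For reference, an independent per-interface policer on NX-OS might look something like the sketch below (the class/policy names, the matched traffic, and the 100 Mbps rate are all assumptions for illustration; check the exact `police` options supported on your platform):

```
ip access-list CLIENT-TRAFFIC
  permit ip any any

class-map type qos match-all CM-CLIENT
  match access-group name CLIENT-TRAFFIC

policy-map type qos PM-POLICE-100M
  class CM-CLIENT
    police cir 100 mbps bc 200 ms conform transmit violate drop

! applied directly to the physical/routed interface, not to a VLAN
interface Ethernet1/10
  service-policy type qos input PM-POLICE-100M
```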
For the rest of your concerns, I’m not exactly sure what the question is. Can you clarify?
Yes, indeed, it would be great if the Nexus devices were to perform everything, but I understand your dilemma. Is using a /30 subnet obligatory? Since these are private network addresses, there shouldn’t be an issue with address wastage… Is using a larger subnet a deal breaker?
That’s what I was afraid of - policing won’t work in such a scenario.
Regarding the /30, these are actually not private addresses. Only the routing towards the ISP is done via private addresses. The actual subnets in our administration are all public IPs, so we want to make the best use of them instead of wasting a /29 on each client with one small firewall.
It is what it is
Generally speaking, the Meraki line of switches falls somewhere between Cisco’s small business series of switches and the Cisco Catalyst switches. Meraki switches are more robust and provide higher capacities and more features than the small business series. At the same time, they are not as well suited to the extremely large, high-performance requirements of some enterprise and corporate networks, where Catalyst switches would be the most appropriate.
These are not hard and fast rules, but more of a guideline. One thing that Meraki does very well compared with the other switch lines is cloud control of your network. Meraki switches (as well as firewalls, routers, access points, and even sensors and cameras) are all preconfigured to be cloud-managed with virtually no complex setup necessary. This makes them extremely attractive to those who want to deploy a large, integrated, multi-faceted network with minimal setup time and easy configuration.
Thanks for the clarification. I understand the need to conserve addresses, and this does indeed introduce a limitation. Unfortunately, vPC still maintains two separate and independent control planes, so when using an FHRP of any kind, each peer must have its own IP address in addition to the shared virtual IP. Other technologies such as VSS (which Nexus doesn’t support) combine two switches into a single control plane, requiring only one IP address on the SVI.
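Concretely, with vPC each peer needs its own SVI address plus the shared HSRP virtual IP, so a /30 (two usable hosts) can’t accommodate an FHRP; a /29 is the smallest subnet that fits. An illustrative breakdown (example prefix only):

```
192.0.2.0/29   network address
192.0.2.1      HSRP virtual IP (clients' default gateway)
192.0.2.2      Nexus-1 SVI address
192.0.2.3      Nexus-2 SVI address
192.0.2.4-6    free for the client device(s)
192.0.2.7      broadcast address
```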
As you say, it is what it is, so you have to work around the limitations you are facing.