SVP Technology at Fiserv; large scale system architecture/infrastructure, tech geek, reading, learning, hiking, GeoCaching, ham radio, married, kids

Palo Alto Networks is buying IBM's QRadar cloud security software assets and moving customers to its own platform; IBM will adopt Palo Alto products internally (Jordan Novet/CNBC)


Jordan Novet / CNBC:
Palo Alto Networks is buying cloud security software assets from IBM as part of a broader partnership that will give …

JayM
10 hours ago
Interesting.
Atlanta, GA

Oracle goes vegan: Dumps Terraform for OpenTofu

JayM
20 hours ago
Atlanta, GA

"Is This Project Still Maintained?"

JayM
22 hours ago
Ha!
Atlanta, GA
cjswayne
16 hours ago
LMAO

MLAG Deep Dive: LAG Member Failures in VXLAN Fabrics


In the Dealing with LAG Member Failures blog post, we figured out how easy it is to deal with a LAG member failure in a traditional MLAG cluster. The failover could happen in hardware, and even if it’s software-driven, it does not depend on the control plane.

Let’s add a bit of complexity and replace a traditional layer-2 fabric with a VXLAN fabric. The MLAG cluster members still use an MLAG peer link and an anycast VTEP IP address.
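As an illustration, the anycast VTEP is often implemented (on Cisco NX-OS, for example) as a shared secondary IP on the NVE source loopback; all addresses below are hypothetical:

```
! Hedged sketch: NX-OS-style anycast VTEP for an MLAG/vPC pair.
! 10.0.0.11/32 is this switch's own VTEP IP; 10.0.0.100/32 is the
! anycast VTEP IP configured as a secondary on BOTH cluster members.
interface loopback1
  ip address 10.0.0.11/32
  ip address 10.0.0.100/32 secondary

interface nve1
  source-interface loopback1
```

Remote VTEPs then see a single anycast next hop for the pair, which is what makes the LAG member failover independent of the control plane.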

JayM
22 hours ago
Atlanta, GA

MicroSD cards' SBC days are numbered

Read the whole story
JayM
22 hours ago
Atlanta, GA

Cisco vPC in VXLAN/EVPN Network – Part 4 – Fabric Peering


As I mentioned in a previous post, leafs normally don’t connect to leafs, but vPC requires this interconnection. What if we don’t want to use physical interfaces for it? This is where fabric peering comes into play. Unfortunately, my lab is virtual and does not support fabric peering, so I will just introduce you to the concept. Let’s compare traditional vPC to fabric peering, starting with traditional vPC:

The traditional vPC has the following pros and cons:

  • Pros:
    • No dependency on other devices for peer link and peer keepalive link.
    • No contention for bandwidth on interfaces as they are dedicated.
    • This also means no QoS configuration is required.
    • Intent of configuration is clear with dedicated interfaces.
  • Cons:
    • Requires dedicated interfaces that could be used for something else.
    • Interfaces have a cost, both in terms of switch ports and SFPs.

Now let’s compare that to fabric peering:

Fabric peering has the following pros and cons:

  • Pros:
    • No dedicated interfaces required.
    • Thus reducing cost.
    • Resiliency as there are multiple paths between the two switches.
  • Cons:
    • Dependency on other devices.
    • Dependency on the underlay.
    • Contention for bandwidth with other traffic.
    • May require QoS.
    • May be more difficult to troubleshoot.

As you can see, fabric peering saves on cost, but it also means you have a dependency on other physical devices and on the underlay in the form of a routing protocol. In the diagram above, I only showed the fabric peering link, but the peer keepalive is also required. It can likewise be routed across the underlay, or you can use a dedicated interface for it. I would lean towards a dedicated interface for the keepalive, as you want to avoid a split-brain scenario at all costs. The peer keepalive is also very lightweight, so you can use, for example, a management interface for it rather than an expensive high-bandwidth port.
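Based on Cisco’s documented vPC fabric peering commands, the setup above might look roughly like this on NX-OS; all addresses, domain IDs, and port-channel numbers are made up for illustration:

```
vpc domain 100
  ! keepalive on the management interface, per the advice above
  peer-keepalive destination 192.168.0.2 source 192.168.0.1 vrf management
  ! virtual peer link routed over the underlay between the two loopbacks
  virtual peer-link destination 10.0.0.2 source 10.0.0.1 dscp 56

! the peer-link port-channel still exists, but has no physical members
interface port-channel 500
  switchport mode trunk
  vpc peer-link
```

Note the `dscp` keyword: because the virtual peer link competes with other fabric traffic, its frames are marked so the underlay QoS policy can protect them, which is exactly the QoS consideration listed in the cons above.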

Now, there is one more thing to mention when it comes to fabric peering. In an upcoming blog post I’m going to show you some of the pitfalls with vPC when a host is connected to only one of the switches, a so-called orphan port. When Cisco came up with fabric peering, they wanted to make this behavior more optimal. If you use fabric peering, it affects how the EVPN Type 2 route (RT2) gets advertised by default: an orphan host gets advertised with the primary IP as opposed to the anycast IP, while vPC-connected hosts get advertised with the anycast IP. This differs from traditional vPC, which advertises both orphan hosts and vPC-connected hosts with the anycast IP.
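For comparison, in a traditional vPC setup similar behavior can be enabled explicitly. A minimal sketch, assuming the NX-OS `advertise-pip` and `advertise virtual-rmac` knobs (the AS number is illustrative):

```
router bgp 65000
  address-family l2vpn evpn
    ! advertise locally attached (orphan) routes with the primary VTEP IP
    advertise-pip

interface nve1
  ! advertise the virtual router MAC with the anycast VTEP IP
  advertise virtual-rmac
```

With fabric peering this split between primary and anycast next hops is the default, so no extra configuration should be needed there.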

Note that fabric peering will require TCAM carving.
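On Nexus 9000, for instance, this means carving the ingress flow-redirect TCAM region before enabling fabric peering; the size below is illustrative, and a reload is typically required for carving to take effect:

```
hardware access-list tcam region ing-flow-redirect 512
! save the configuration, then reload for the carving to take effect
copy running-config startup-config
```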

In the next post we’ll take a look at some of the potential pitfalls with vPC.

JayM
22 hours ago
Atlanta, GA