Re: Problem with Multicast and VRF Lite

From: John Neiberger <jneiberger_at_gmail.com>
Date: Tue, 8 Mar 2011 20:42:16 -0700

Sorry, one last quick note. I'm also doing this partly just to learn the
technology. So, what started out as just a learning experience may
have an application back at work if I could pull it off in a
relatively simple way. Part of me just wants to find some cool way to
make it work whether or not it's a good idea. :)

On Tue, Mar 8, 2011 at 8:38 PM, John Neiberger <jneiberger_at_gmail.com> wrote:
> Now that I think about it, a workaround would be to create the VRFs on
> the router and then just run a cable from the source VRF to the other
> "lab" VRFs. We wouldn't even have to worry about this other stuff.
> However, that would chew up interfaces a lot faster. Then again, it's
> not like we're going to have more than a handful of VRFs, so that
> might just be the simplest and most elegant solution.
>
> On Tue, Mar 8, 2011 at 8:35 PM, John Neiberger <jneiberger_at_gmail.com> wrote:
>> Thanks for all that info! This started out as just me playing with VRF
>> Lite because we may want to use it at work for a particular
>> application. There is another lab area where I might want to use this
>> other idea I'm trying to get to work. Basically, we want to have a set
>> of lab routers and switches that are subdivided into multiple virtual
>> labs that are almost entirely separate from one another. In this lab,
>> we have a large number of multicast video sources. I'd like to be able
>> to pull those sources onto a router into a single VRF, then create
>> VRFs for the other virtual lab instances, and then somehow allow
>> those VRFs to join the multicast streams that are present in the
>> "source" VRF.
>>
>> It's basically a thought we had that would allow us to create new lab
>> setups ad hoc and would give them relatively easy access to
>> our multicast source streams without having to add new physical
>> connections into the actual source network, and it would also allow us
>> to have multiple labs doing different things. For example, one virtual
>> lab could be testing some new video encoding equipment from one
>> vendor, while another lab might be testing video compression equipment
>> from yet a different vendor. And lastly, we could have a sandbox lab
>> where the engineering group tries new things, all of which could be
>> done without interfering with the other VRFs or the source network.
>>
>> I'm sort of rambling, so I hope that makes some sense. :) I'm just
>> trying to toy around with it right now. I've been labbing it up in
>> GNS3 and talking to other engineers about it. If we can pull it off
>> for real, though, it would be on 7600s and ASR9Ks.
>>
>> Thanks again,
>> John
>>
>> On Tue, Mar 8, 2011 at 8:22 PM, Brian McGahan <bmcgahan_at_ine.com> wrote:
>>> Hi John,
>>>
>>> I don't think MDT is going to help you in this design. Basically, an MDT is a multipoint GRE tunnel. PE routers that support the same VRF join the same "default" MDT. GRE packets are sent from the PE router with a source address of the Loopback (typically) and the multicast destination address of the default MDT. The PE routers already know each other's Loopback addresses, because these are the next-hop values for VPNv4 (MPLS BGP) routes. The MDT address is usually in the SSM multicast range, in which case each PE sends an (S,G) PIM Join toward each of the other PE routers, where S is the remote PE's Loopback address and G is the MDT default address. The final result is that each PE automatically forms a GRE tunnel to the other PEs using that MDT address.
>>>
>>> Once the tunnel is up, PIM adjacencies form over it. The MDT (GRE) tunnel is treated just like any other PIM-enabled interface in the incoming or outgoing interface list. The MDT can participate in PIM dense mode, PIM sparse mode with an RP, PIM sparse mode with SSM, or bidirectional PIM, just like any other PIM-enabled link. When multicast traffic is received from a CE and is destined for another VPN site, regardless of the destination address, it is GRE encapsulated with the source address of the PE's Loopback and the destination address of the MDT. Since all other PEs are listening for this (S,G), they all receive it.
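>>>
>>> You can sanity-check all of this with a couple of commands (just examples using your VRF name; the MDT shows up as a Tunnel interface in the VRF):
>>>
>>> show ip pim vrf Orange neighbor
>>> show ip mroute vrf Orange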
>>>
>>> If you want to, you can also configure a "data" MDT. This is an additional range of addresses that can be used for new GRE tunnels between specific PEs, in order to optimize the traffic flow. With the default MDT, when one site sends a multicast packet out to the MDT, all other sites receive it, even if they don't want it. A data MDT can fix this by creating new tunnels depending on which PEs want which groups from the CE. It's kind of like 1:1 NAT, but the addresses are still GRE encapsulated instead of translated.
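>>>
>>> On the PE, all of this ends up as just a few lines. A rough sketch (the addresses here are made up, and this assumes the VPNv4 BGP piece is already in place):
>>>
>>> ip multicast-routing
>>> ip multicast-routing vrf Orange
>>> ip pim ssm default
>>> !
>>> ip vrf Orange
>>>  rd 65000:1
>>>  mdt default 232.1.1.1
>>>  mdt data 232.1.2.0 0.0.0.255 threshold 10
>>> !
>>> interface Loopback0
>>>  ip address 10.0.0.1 255.255.255.255
>>>  ip pim sparse-mode
>>>
>>> The "threshold" is in kbps; (S,G) streams in the VRF that exceed it get moved off the default MDT onto a group from the data range.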
>>>
>>> I believe the problem that you'll have in your case is that VRF-lite isn't really designed to leak traffic between different routing tables; the whole point of the VRF is to keep the routes and traffic separate. It might be possible to hack this up by creating a physical loopback on the router, e.g. plugging GigE0/0 into GigE0/1, and then configuring the interfaces into different VRFs. This way you'd be able to leak the traffic between tables because the router basically wouldn't know that it's peering with itself. This solution is probably more trouble than it's worth though.
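>>>
>>> If you did want to try that hack anyway, it's just something like this (untested sketch, addresses made up):
>>>
>>> interface GigabitEthernet0/0
>>>  ip vrf forwarding Shared
>>>  ip address 192.168.100.1 255.255.255.252
>>>  ip pim sparse-mode
>>> !
>>> interface GigabitEthernet0/1
>>>  ip vrf forwarding Orange
>>>  ip address 192.168.100.2 255.255.255.252
>>>  ip pim sparse-mode
>>>
>>> With a physical cable between Gi0/0 and Gi0/1, the two VRFs see each other as ordinary PIM/IGMP neighbors, so joins and traffic cross the link as if they were two separate routers.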
>>>
>>> Also, MDT isn't a VRF feature per se; it's an MPLS L3 VPN feature. L3 VPN is made up of two parts, the VRF and the VPNv4 BGP. VRF-lite skips over the VPNv4 BGP part, so even if you used multiple routers, they wouldn't be able to figure out how to form the GRE tunnels, since they don't know each other's Loopbacks as VPNv4 BGP next-hops.
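>>>
>>> For comparison, the VPNv4 piece that VRF-lite leaves out looks roughly like this between two PEs (sketch only; the neighbor address is made up):
>>>
>>> router bgp 65000
>>>  neighbor 10.0.0.2 remote-as 65000
>>>  neighbor 10.0.0.2 update-source Loopback0
>>>  address-family vpnv4
>>>   neighbor 10.0.0.2 activate
>>>   neighbor 10.0.0.2 send-community extended
>>>
>>> That session is what teaches each PE the other PEs' Loopbacks as next-hops, which is what the MDT (S,G) joins are built from.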
>>>
>>> What exactly are you trying to solve with your design? Let me know more specifics about what you want the end result to be, and I'm sure we can figure out other feasible solutions.
>>>
>>>
>>> HTH,
>>>
>>> Brian McGahan, CCIE #8593 (R&S/SP/Security)
>>> bmcgahan_at_INE.com
>>>
>>> Internetwork Expert, Inc.
>>> http://www.INE.com
>>> Toll Free: 877-224-8987 x 705
>>> Outside US: 775-826-4344 x 705
>>> Online Community: http://www.IEOC.com
>>> CCIE Blog: http://blog.INE.com
>>>
>>>
>>> -----Original Message-----
>>> From: nobody_at_groupstudy.com [mailto:nobody_at_groupstudy.com] On Behalf Of John Neiberger
>>> Sent: Tuesday, March 08, 2011 7:56 PM
>>> To: Rich Collins
>>> Cc: ccielab_at_groupstudy.com
>>> Subject: Re: Problem with Multicast and VRF Lite
>>>
>>> Yeah, the SSM part is throwing me. Besides, I don't really understand
>>> how a lot of this works yet. I was just trying to fight through it to
>>> see if I could make it work. I've read the command reference for mdt
>>> and I still don't understand what it actually is doing. For example, I
>>> don't understand what the group range I assign to the command is used
>>> for. Does it convert my actual source groups to the group addresses I
>>> specify in the mdt data command? If not, what is it actually doing?
>>>
>>> Ultimately, I want several VRFs with receivers to pull data from
>>> another VRF that contains nothing but multicast sources.
>>>
>>> -John
>>>
>>> On Tue, Mar 8, 2011 at 6:30 PM, Rich Collins <nilsi2002_at_gmail.com> wrote:
>>>> Hi,
>>>>
>>>> I tried an example similar to yours with a static rp-address (on one
>>>> side only) and no mdt. That seemed to work fine. I have to review
>>>> SSM to try it again to more closely match your example.
>>>>
>>>> I suppose you don't need an mdt GRE tunnel since you are not trying to
>>>> link two PEs across the core backbone.
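>>>>
>>>> For reference, the static RP inside a VRF is just one line per VRF, e.g. (RP address made up here; it has to be reachable inside that VRF):
>>>>
>>>> ip pim vrf Orange rp-address 10.1.1.1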
>>>>
>>>> -Rich
>>>>
>>>> On Sat, Mar 5, 2011 at 11:33 PM, John Neiberger <jneiberger_at_gmail.com> wrote:
>>>>> I'd never configured VRF Lite or multiprotocol BGP until today. I'll
>>>>> be up front and say that I really don't know what I'm doing with it,
>>>>> lol. I'm trying to simulate a lab scenario that I want to create in a
>>>>> real lab at work. I'm using GNS3 at the moment. I have two VRFs for
>>>>> multicast receivers (Orange and Blue) and a VRF called Shared for my
>>>>> multicast sources. Here's a simplified network diagram only dealing
>>>>> with the Orange VRF:
>>>>>
>>>>> A -------- B -------- C --------- D
>>>>>
>>>>> Simple! :) The Orange VRF extends from A to C, while the Shared VRF
>>>>> extends from C to D. OSPF is running in each VRF and I'm using MP-BGP
>>>>> to redistribute the routes so that sources and receivers can reach
>>>>> each other. Unicast reachability is working.
>>>>>
>>>>> I have PIM Sparse Mode configured end-to-end. I have an IGMP join
>>>>> configured on a loopback interface on A, along with IGMPv3 so the SSM
>>>>> join works. I can see valid mroutes on A, B and C, and from D to C.
>>>>> I'm sourcing an extended ping from D to C to simulate a source. It is
>>>>> sending to a 232/8 address. I have pim ssm default configured
>>>>> everywhere, including on the VRFs on C.
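>>>>>
>>>>> For reference, the join on A looks basically like this (the group is made up, I'm assuming D's interface is 10.1.46.2, and the exact join-group syntax varies by IOS version):
>>>>>
>>>>> interface Loopback0
>>>>>  ip pim sparse-mode
>>>>>  ip igmp version 3
>>>>>  ip igmp join-group 232.2.2.2 source 10.1.46.2
>>>>>
>>>>> The extended ping on D then sources from 10.1.46.2 toward 232.2.2.2.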
>>>>>
>>>>> The problem now appears to be that I can't get PIM to talk across the
>>>>> VRF boundary. I've been reading up on this and I don't understand how
>>>>> to solve this problem. When reading about multicast VPNs I see that
>>>>> MDT is used, but I don't understand how it works. It is pure
>>>>> conjecture on my part that adding the correct MDT configuration might
>>>>> create a tunnel between the two VRFs for multicast.
>>>>>
>>>>> Am I on the right track? Any thoughts?
>>>>>
>>>>> Thanks!
>>>>> John
>>>>>
>>>>> P.S. Here is the config on C (called R4 in the config). I've been
>>>>> playing around with the mdt commands even though I don't really
>>>>> understand them yet. I was hoping to get lucky. :)
>>>>>
>>>>>
>>>>> version 12.4
>>>>> service timestamps debug datetime msec
>>>>> service timestamps log datetime msec
>>>>> no service password-encryption
>>>>> !
>>>>> hostname R4
>>>>> !
>>>>> boot-start-marker
>>>>> boot-end-marker
>>>>> !
>>>>> !
>>>>> no aaa new-model
>>>>> memory-size iomem 5
>>>>> ip cef
>>>>> !
>>>>> !
>>>>> !
>>>>> !
>>>>> ip vrf Blue
>>>>> rd 65000:2
>>>>> route-target export 65000:2
>>>>> route-target import 65000:3
>>>>> !
>>>>> ip vrf Orange
>>>>> rd 65000:1
>>>>> route-target export 65000:1
>>>>> route-target import 65000:3
>>>>> mdt default 232.1.1.2
>>>>> mdt data 232.1.3.0 0.0.0.255
>>>>> !
>>>>> ip vrf Shared
>>>>> rd 65000:3
>>>>> route-target export 65000:3
>>>>> route-target import 65000:1
>>>>> route-target import 65000:2
>>>>> mdt default 232.1.1.1
>>>>> mdt data 232.1.2.0 0.0.0.255
>>>>> !
>>>>> no ip domain lookup
>>>>> ip multicast-routing vrf Orange
>>>>> ip multicast-routing vrf Shared
>>>>> !
>>>>> multilink bundle-name authenticated
>>>>> !
>>>>> !
>>>>> interface FastEthernet0/0
>>>>> no ip address
>>>>> speed 100
>>>>> full-duplex
>>>>> !
>>>>> interface FastEthernet0/0.1
>>>>> encapsulation dot1Q 10
>>>>> ip vrf forwarding Orange
>>>>> ip address 10.1.34.2 255.255.255.0
>>>>> ip pim sparse-mode
>>>>> !
>>>>> interface FastEthernet0/0.2
>>>>> encapsulation dot1Q 20
>>>>> ip vrf forwarding Blue
>>>>> ip address 10.2.34.2 255.255.255.0
>>>>> !
>>>>> interface FastEthernet0/1
>>>>> ip vrf forwarding Orange
>>>>> ip address 10.1.45.1 255.255.255.0
>>>>> duplex auto
>>>>> speed auto
>>>>> !
>>>>> interface FastEthernet1/0
>>>>> ip vrf forwarding Shared
>>>>> ip address 10.1.46.1 255.255.255.0
>>>>> ip pim sparse-mode
>>>>> ip igmp version 3
>>>>> duplex auto
>>>>> speed auto
>>>>> !
>>>>> router ospf 1 vrf Orange
>>>>> log-adjacency-changes
>>>>> redistribute bgp 65000 subnets
>>>>> network 10.1.34.0 0.0.0.255 area 0
>>>>> network 10.1.45.0 0.0.0.255 area 0
>>>>> !
>>>>> router ospf 2 vrf Blue
>>>>> log-adjacency-changes
>>>>> redistribute bgp 65000 subnets
>>>>> network 10.2.34.0 0.0.0.255 area 0
>>>>> !
>>>>> router ospf 3 vrf Shared
>>>>> log-adjacency-changes
>>>>> redistribute bgp 65000 subnets
>>>>> network 10.1.46.1 0.0.0.0 area 0
>>>>> !
>>>>> router bgp 65000
>>>>> no synchronization
>>>>> bgp router-id 10.1.45.1
>>>>> bgp log-neighbor-changes
>>>>> no auto-summary
>>>>> !
>>>>> address-family ipv4 vrf Shared
>>>>> redistribute connected
>>>>> redistribute ospf 3 vrf Shared
>>>>> no synchronization
>>>>> exit-address-family
>>>>> !
>>>>> address-family ipv4 vrf Orange
>>>>> redistribute connected
>>>>> redistribute ospf 1 vrf Orange
>>>>> no synchronization
>>>>> exit-address-family
>>>>> !
>>>>> address-family ipv4 vrf Blue
>>>>> redistribute connected
>>>>> redistribute ospf 2 vrf Blue
>>>>> no synchronization
>>>>> exit-address-family
>>>>> !
>>>>> ip forward-protocol nd
>>>>> !
>>>>> !
>>>>> no ip http server
>>>>> no ip http secure-server
>>>>> ip pim ssm default
>>>>> ip pim vrf Orange ssm default
>>>>> ip pim vrf Shared ssm default
>>>>> !
