Re: [qpimd-users] Unable to route multicast video streams using qpimd.


From: Everton Marques
Subject: Re: [qpimd-users] Unable to route multicast video streams using qpimd.
Date: Fri, 27 Nov 2009 11:51:43 -0200

Hi Yoda,

I suspect the same kind of byte-ordering bug may be affecting the IGMP
code as well; so far I have only verified the PIM code.
I will look at the IGMP byte-ordering code as soon as I can, in a few days.
Thanks a lot for confirming that the PIM byte ordering seems good now.
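
(For context, a minimal sketch of the class of bug being discussed: a 32-bit
field written through a wider type. On most 64-bit Linux systems C's long is 8
bytes, so copying sizeof(long) bytes spills into whatever follows in the
packet. The function and buffer layout below are made up for illustration;
this is not the actual qpimd code.)

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htons, htonl */

    /* Hypothetical encoder for a 32-bit PIM hello option: type, length=4, value. */
    static size_t put_u32_option(uint8_t *buf, uint16_t type, uint32_t value)
    {
        uint16_t t = htons(type);
        uint16_t l = htons(4);        /* a 32-bit option always has length 4 */
        uint32_t v = htonl(value);    /* fixed-width type: writes exactly 4 bytes */

        memcpy(buf + 0, &t, 2);
        memcpy(buf + 2, &l, 2);
        memcpy(buf + 4, &v, 4);
        return 8;

        /* A buggy variant could instead do:
         *     unsigned long v = htonl(value);
         *     memcpy(buf + 4, &v, sizeof(v));   // 8 bytes on LP64 targets
         * silently spilling 4 extra bytes into the next option -- one way a
         * 64-bit host can corrupt TLVs while a 32-bit build looks fine. */
    }

    int main(void)
    {
        uint8_t buf[8];
        size_t n = put_u32_option(buf, 20, 0x12345678u);  /* 20 = Generation ID */
        for (size_t i = 0; i < n; i++)
            printf("%02x ", buf[i]);
        printf("\n");
        return 0;
    }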

Cheers,
Everton

On Thu, Nov 26, 2009 at 6:14 AM, Yoda geek <address@hidden> wrote:
> Hi Everton,
>
> Thanks for the bug fix. We're currently running the latest code from the git
> repository. I notice that each of the nodes recognizes the other one as a
> neighbor and the "rfail" counter for "show ip pim hello" is 0 - which is
> much better behavior than before.
>
> However, we are still unable to pass multicast traffic. The command "show ip igmp
> sources" on each node returns nothing. The command "show ip igmp groups" on
> node 2 lists "239.255.255.250" as a group, which is good; however, the same
> command on node 1 returns nothing.
>
> Is there some configuration missing here?
>
> The network setup and the configuration files are the same as stated at the top
> of the thread. Any hints or help would be highly appreciated.
>
> Regards,
> Pravin
>
> On Mon, Nov 23, 2009 at 8:28 AM, Everton Marques <address@hidden>
> wrote:
>>
>> Hi Yoda,
>>
>> Just to let you know, I think you spotted a byte-ordering bug in qpimd
>> while converting 32-bit values from host to network.
>>
>> I suppose you are running on a 64-bit CPU?
>>
>> I think it is fixed now in the git repository, but I have not been able to
>> spare time to test it. I hope to test it properly by next week.
>>
>> Cheers,
>> Everton
>>
>>
>> On Thu, Nov 19, 2009 at 10:36 AM, Everton Marques
>> <address@hidden> wrote:
>> > Yoda,
>> >
>> > I am looking at this.
>> >
>> > Thanks a lot,
>> > Everton
>> >
>> > On Tue, Nov 17, 2009 at 5:23 AM, Yoda geek <address@hidden>
>> > wrote:
>> >> Hi Everton,
>> >>
>> >> It seems the PIM packet option length exceeds what is permitted according to
>> >> the code and the error log. The pimd.log is full of the following messages:
>> >>
>> >>
>> >> address@hidden:~# tail -f /usr/local/logs/pimd.log
>> >>
>> >> 2009/08/24 04:02:54.996722 warnings: PIM: pim_hello_recv: long PIM hello TLV type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
>> >>
>> >> 2009/08/24 04:02:55.001093 warnings: PIM: pim_hello_recv: long PIM hello TLV type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
>> >>
>> >> 2009/08/24 04:03:24.996542 warnings: PIM: pim_hello_recv: long PIM hello TLV type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_ap0
>> >>
>> >> 2009/08/24 04:03:25.001374 warnings: PIM: pim_hello_recv: long PIM hello TLV type=20 length=43780 > max=4 from 192.168.3.10 on interface ra_sta0
>> >>
>> >> Also, below are the contents of the PIMv2 packet captured by Wireshark.
>> >> Please note that I have stripped off the IP and other headers; just the PIM
>> >> protocol packet and the offsets from Wireshark remain:
>> >>
>> >> 0000   20 00 f6 56 00 01 00 02 00 69 00 02 00 04 01 f4
>> >> 0010   09 c4 00 13 00 04 00 01 00 00 00 14 ab 04 32 4e
>> >> 0020   00 00
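
Decoded against the standard PIM hello TLV layout (2-byte option type, 2-byte
option length, then the value), the dump works out to: holdtime (type 1, length
2, value 105), LAN prune delay (type 2, length 4), DR priority (type 19, length
4, value 0x00010000 rather than the usual default of 1), and finally a
Generation ID option (type 20) whose length field reads 0xab04 = 43780 instead
of 4 - exactly the length pim_hello_recv complains about above. A few lines of
C reproduce that walk (illustrative only, not qpimd's actual parser):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* The 34 bytes from the Wireshark dump above. */
        static const uint8_t pkt[] = {
            0x20,0x00,0xf6,0x56, 0x00,0x01,0x00,0x02,0x00,0x69,
            0x00,0x02,0x00,0x04,0x01,0xf4,0x09,0xc4,
            0x00,0x13,0x00,0x04,0x00,0x01,0x00,0x00,
            0x00,0x14,0xab,0x04,0x32,0x4e,0x00,0x00
        };
        size_t off = 4;  /* skip ver/type, reserved, checksum */

        while (off + 4 <= sizeof(pkt)) {
            unsigned type = (pkt[off] << 8) | pkt[off + 1];
            unsigned len  = (pkt[off + 2] << 8) | pkt[off + 3];
            printf("option type=%u length=%u\n", type, len);
            off += 4 + len;  /* the bogus length runs past the end and stops the loop */
        }
        return 0;
    }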
>> >>
>> >> Thanks again for all your help.
>> >>
>> >> Regards,
>> >>
>> >> Yoda
>> >>
>> >>
>> >> On Mon, Nov 16, 2009 at 3:28 AM, Everton Marques
>> >> <address@hidden>
>> >> wrote:
>> >>>
>> >>> Hi Yoda,
>> >>>
>> >>> Thanks.
>> >>>
>> >>> Yes, I am looking for the reason why the Rfail counter is increasing.
>> >>>
>> >>> When PIM_CHECK_RECV_IFINDEX_SANITY is defined in pimd/Makefile.am,
>> >>> Rfail may increment silently. However, now that you have undefined
>> >>> PIM_CHECK_RECV_IFINDEX_SANITY, any increment in Rfail should have a
>> >>> related log message.
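
(For readers: the kind of sanity check being discussed is, roughly, comparing
the incoming interface the kernel reports for each packet, via IP_PKTINFO
ancillary data, against the interface the socket is expected to receive on, and
dropping the packet on a mismatch. Whether qpimd's check is implemented exactly
this way is not shown here; the sketch below only illustrates the general
pattern, with made-up names.)

    #define _GNU_SOURCE
    #include <stdint.h>
    #include <string.h>
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/uio.h>
    #include <netinet/in.h>

    /* Receive one packet; return its length, or -1 if it arrived on an
     * interface other than expected_ifindex (a silent drop, counted by the
     * caller).  Assumes IP_PKTINFO has been enabled on fd with setsockopt().
     * Illustrative only -- not qpimd's actual implementation. */
    ssize_t recv_checked(int fd, int expected_ifindex, void *buf, size_t len)
    {
        uint8_t cbuf[CMSG_SPACE(sizeof(struct in_pktinfo))];
        struct iovec iov = { .iov_base = buf, .iov_len = len };
        struct msghdr msg = {
            .msg_iov = &iov, .msg_iovlen = 1,
            .msg_control = cbuf, .msg_controllen = sizeof(cbuf),
        };

        ssize_t n = recvmsg(fd, &msg, 0);
        if (n < 0)
            return -1;

        for (struct cmsghdr *c = CMSG_FIRSTHDR(&msg); c; c = CMSG_NXTHDR(&msg, c)) {
            if (c->cmsg_level == IPPROTO_IP && c->cmsg_type == IP_PKTINFO) {
                struct in_pktinfo pi;
                memcpy(&pi, CMSG_DATA(c), sizeof(pi));
                if (pi.ipi_ifindex != expected_ifindex)
                    return -1;   /* wrong interface: drop */
            }
        }
        return n;
    }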
>> >>>
>> >>> Can you see if you can locate any meaningful message in your pimd logs?
>> >>>
>> >>> If you send me your pimd logs I can try to find something as well.
>> >>>
>> >>> Thanks a lot,
>> >>> Everton
>> >>>
>> >>>
>> >>>
>> >>> On Sun, Nov 15, 2009 at 5:23 AM, Yoda geek <address@hidden>
>> >>> wrote:
>> >>> > Hi Everton,
>> >>> >
>> >>> > I followed the directions exactly as you suggested and ran the rebuilt
>> >>> > quagga on the nodes. However, I don't see any difference in behavior. Is
>> >>> > there anything in particular you're looking for after these changes?
>> >>> > Below is the output from pimd running on both nodes:
>> >>> >
>> >>> >
>> >>> > Trying 192.168.1.1...
>> >>> >
>> >>> > Connected to 192.168.1.1.
>> >>> >
>> >>> > Escape character is '^]'.
>> >>> >
>> >>> > Hello, this is Quagga 0.99.15 pimd 0.158
>> >>> >
>> >>> > Copyright 1996-2005 Kunihiro Ishiguro, et al.
>> >>> >
>> >>> >
>> >>> >
>> >>> > User Access Verification
>> >>> >
>> >>> > Password:
>> >>> >
>> >>> > node1> enable
>> >>> >
>> >>> > Password:
>> >>> >
>> >>> > node1# show ip pim neighbor
>> >>> >
>> >>> > Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id A=address_list
>> >>> >
>> >>> > T=can_disable_join_suppression
>> >>> >
>> >>> > Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
>> >>> >
>> >>> > node1# show ip pim hello
>> >>> >
>> >>> > Interface Address Period Timer StatStart Recv Rfail Send Sfail
>> >>> >
>> >>> > ra_ap0 192.168.4.20 00:30 00:16 00:10:19 20 20 21 0
>> >>> >
>> >>> > ra_sta0 192.168.3.20 00:30 00:14 00:10:19 20 20 21 0
>> >>> >
>> >>> > node1# q
>> >>> >
>> >>> > Connection closed by foreign host.
>> >>> >
>> >>> > Trying 192.168.3.10...
>> >>> >
>> >>> > Connected to 192.168.3.10.
>> >>> >
>> >>> > Escape character is '^]'.
>> >>> >
>> >>> > Hello, this is Quagga 0.99.15 pimd 0.158
>> >>> >
>> >>> > Copyright 1996-2005 Kunihiro Ishiguro, et al.
>> >>> >
>> >>> >
>> >>> >
>> >>> > User Access Verification
>> >>> >
>> >>> > Password:
>> >>> >
>> >>> > node2> enable
>> >>> >
>> >>> > Password:
>> >>> >
>> >>> > node2# show ip pim neighbor
>> >>> >
>> >>> > Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id A=address_list
>> >>> >
>> >>> > T=can_disable_join_suppression
>> >>> >
>> >>> > Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
>> >>> >
>> >>> > node2# show ip pim hello
>> >>> >
>> >>> > Interface Address Period Timer StatStart Recv Rfail Send Sfail
>> >>> >
>> >>> > ra_ap0 192.168.5.10 00:30 00:08 00:11:26 23 23 23 0
>> >>> >
>> >>> > ra_sta0 192.168.3.10 00:30 00:05 00:11:26 23 23 23 0
>> >>> >
>> >>> > node2# q
>> >>> >
>> >>> > Connection closed by foreign host.
>> >>> >
>> >>> >
>> >>> >
>> >>> > Thanks,
>> >>> >
>> >>> > Yoda
>> >>> >
>> >>> >
>> >>> > On Fri, Nov 13, 2009 at 4:16 AM, Everton Marques
>> >>> > <address@hidden>
>> >>> > wrote:
>> >>> >>
>> >>> >> Hi Yoda,
>> >>> >>
>> >>> >> Based on the Rfail counter you spotted, I suspect the code under
>> >>> >> PIM_CHECK_RECV_IFINDEX_SANITY may be discarding hello packets.
>> >>> >>
>> >>> >> Can you experiment with commenting out the following line:
>> >>> >>
>> >>> >> PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY
>> >>> >>
>> >>> >> from pimd/Makefile.am ?
>> >>> >>
>> >>> >> Then you will need to bootstrap autotools with:
>> >>> >>
>> >>> >> autoreconf -i --force
>> >>> >>
>> >>> >> And finally to rebuild quagga.
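
Spelled out, the whole cycle is roughly the following (the configure options
are whatever you normally build Quagga with, so adjust to taste; "make install"
only if you install over the previous build):

    # pimd/Makefile.am: comment out the define
    # PIM_DEFS += -DPIM_CHECK_RECV_IFINDEX_SANITY

    autoreconf -i --force
    ./configure <your usual options>
    make
    make install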
>> >>> >>
>> >>> >> I know this test may be cumbersome since it requires the whole autotools
>> >>> >> suite to be present on your system, but it could help identify why pimd is
>> >>> >> missing the hello packets.
>> >>> >>
>> >>> >> Thanks,
>> >>> >> Everton
>> >>> >>
>> >>> >>
>> >>> >> On Fri, Nov 13, 2009 at 7:30 AM, Yoda geek
>> >>> >> <address@hidden>
>> >>> >> wrote:
>> >>> >> > Hi Everton,
>> >>> >> >
>> >>> >> > Below are the answers :
>> >>> >> >
>> >>> >> > 1) "ip pim ssm" is enabled on node1 ra_sta0 as well as on node2 ra_sta0.
>> >>> >> >
>> >>> >> > 2) I do see in the wireshark trace that ra_sta0 on both nodes 1 and 2 is
>> >>> >> > receiving PIMv2 "Hello" packets; however, they are addressed to 224.0.0.13.
>> >>> >> >
>> >>> >> > 3) I don't see any error logs on nodes 1 and 2. Below is the output of
>> >>> >> > "show ip pim hello" on both nodes 1 and 2. Please notice the "Rfail" counters.
>> >>> >> >
>> >>> >> > node1# show ip pim hello
>> >>> >> > Interface Address         Period Timer StatStart Recv Rfail Send Sfail
>> >>> >> > ra_ap0    192.168.4.20     00:30 00:05  29:57:50    0  3496 3595     0
>> >>> >> > ra_sta0   192.168.3.20     00:30 00:04  29:57:50 3496  3496 3595     0
>> >>> >> > node1#
>> >>> >> >
>> >>> >> > node2# show ip pim hello
>> >>> >> > Interface Address         Period Timer StatStart Recv Rfail Send Sfail
>> >>> >> > ra_ap0    192.168.5.10     00:30 00:04  29:56:48    0  3590 3593     0
>> >>> >> > ra_sta0   192.168.3.10     00:30 00:07  29:56:48 3590  3590 3593     0
>> >>> >> > node2#
>> >>> >> >
>> >>> >> >
>> >>> >> > Thanks,
>> >>> >> >
>> >>> >> > On Wed, Nov 11, 2009 at 6:04 AM, Everton Marques
>> >>> >> > <address@hidden>
>> >>> >> > wrote:
>> >>> >> >>
>> >>> >> >> Hi,
>> >>> >> >>
>> >>> >> >> I think the problem is that node2 fails to bring up node1 as a pim neighbor
>> >>> >> >> on ra_sta0, since node1 is missing from node2's "show ip pim neighbor".
>> >>> >> >>
>> >>> >> >> Can you please double-check the following?
>> >>> >> >>
>> >>> >> >> 1) Is "ip pim ssm" enabled on node1 ra_sta0?
>> >>> >> >> 2) Is node2 receiving pim hello packets from node1 on ra_sta0?
>> >>> >> >> 3) Is node2 pimd logging any error/warning? Look for messages about packets
>> >>> >> >> from node1, especially hello packets.
>> >>> >> >>
>> >>> >> >> Thanks,
>> >>> >> >> Everton
>> >>> >> >>
>> >>> >> >> On Wed, Nov 11, 2009 at 4:48 AM, Yoda geek
>> >>> >> >> <address@hidden>
>> >>> >> >> wrote:
>> >>> >> >> > Below is the output as requested
>> >>> >> >> >
>> >>> >> >> >
>> >>> >> >> > User Access Verification
>> >>> >> >> >
>> >>> >> >> > Password:
>> >>> >> >> >
>> >>> >> >> > node2> enable
>> >>> >> >> >
>> >>> >> >> > Password:
>> >>> >> >> >
>> >>> >> >> > node2# show ip igmp interface
>> >>> >> >> >
>> >>> >> >> > Interface Address ifIndex Socket Uptime Multi Broad MLoop AllMu Prmsc Del
>> >>> >> >> >
>> >>> >> >> > ra_ap0 192.168.5.10 5 9 00:34:40 yes yes yes no no no
>> >>> >> >> >
>> >>> >> >> > node2# show ip igmp group
>> >>> >> >> >
>> >>> >> >> > Interface Address Group Mode Timer Srcs V Uptime
>> >>> >> >> >
>> >>> >> >> > ra_ap0 192.168.5.10 224.0.0.13 EXCL 00:03:55 0 3 00:34:48
>> >>> >> >> >
>> >>> >> >> > ra_ap0 192.168.5.10 224.0.0.22 EXCL 00:03:55 0 3 00:34:48
>> >>> >> >> >
>> >>> >> >> > ra_ap0 192.168.5.10 239.255.255.250 EXCL 00:03:59 0 3 00:02:17
>> >>> >> >> >
>> >>> >> >> > node2# show ip igmp sources
>> >>> >> >> >
>> >>> >> >> > Interface Address Group Source Timer Fwd Uptime
>> >>> >> >> >
>> >>> >> >> > node2# show ip pim designated-router
>> >>> >> >> >
>> >>> >> >> > NonPri: Number of neighbors missing DR Priority hello option
>> >>> >> >> >
>> >>> >> >> > Interface Address DR Uptime Elections NonPri
>> >>> >> >> >
>> >>> >> >> > ra_ap0 192.168.5.10 192.168.5.10 00:35:16 1 0
>> >>> >> >> >
>> >>> >> >> > ra_sta0 192.168.3.10 192.168.3.10 00:35:16 1 0
>> >>> >> >> >
>> >>> >> >> > node2# show ip pim hello
>> >>> >> >> >
>> >>> >> >> > Interface Address Period Timer StatStart Recv Rfail Send Sfail
>> >>> >> >> >
>> >>> >> >> > ra_ap0 192.168.5.10 00:30 00:08 00:35:23 0 70 71 0
>> >>> >> >> >
>> >>> >> >> > ra_sta0 192.168.3.10 00:30 00:10 00:35:23 70 70 71 0
>> >>> >> >> >
>> >>> >> >> > node2# show ip pim interface
>> >>> >> >> >
>> >>> >> >> > Interface Address ifIndex Socket Uptime Multi Broad MLoop AllMu Prmsc Del
>> >>> >> >> >
>> >>> >> >> > ra_ap0 192.168.5.10 5 10 00:35:30 yes yes no no no no
>> >>> >> >> >
>> >>> >> >> > ra_sta0 192.168.3.10 6 11 00:35:30 yes yes no no no no
>> >>> >> >> >
>> >>> >> >> > node2# show ip pim local-membership
>> >>> >> >> >
>> >>> >> >> > Interface Address Source Group Membership
>> >>> >> >> >
>> >>> >> >> > node2# show ip pim join
>> >>> >> >> >
>> >>> >> >> > Interface Address Source Group State Uptime Expire Prune
>> >>> >> >> >
>> >>> >> >> > node2# show ip pim neighbor
>> >>> >> >> >
>> >>> >> >> > Recv flags: H=holdtime L=lan_prune_delay P=dr_priority G=generation_id A=address_list
>> >>> >> >> >
>> >>> >> >> > T=can_disable_join_suppression
>> >>> >> >> >
>> >>> >> >> > Interface Address Neighbor Uptime Timer Holdt DrPri GenId Recv
>> >>> >> >> >
>> >>> >> >> > node2# show ip pim rpf
>> >>> >> >> >
>> >>> >> >> > RPF Cache Refresh Delay: 10000 msecs
>> >>> >> >> >
>> >>> >> >> > RPF Cache Refresh Timer: 0 msecs
>> >>> >> >> >
>> >>> >> >> > RPF Cache Refresh Requests: 6
>> >>> >> >> >
>> >>> >> >> > RPF Cache Refresh Events: 3
>> >>> >> >> >
>> >>> >> >> > RPF Cache Refresh Last: 00:34:24
>> >>> >> >> >
>> >>> >> >> > Source Group RpfIface RpfAddress RibNextHop Metric Pref
>> >>> >> >> >
>> >>> >> >> > node2# show ip pim upstream
>> >>> >> >> >
>> >>> >> >> > Source Group State Uptime JoinTimer RefCnt
>> >>> >> >> >
>> >>> >> >> > node2# show ip pim upstream-join-desired
>> >>> >> >> >
>> >>> >> >> > Interface Source Group LostAssert Joins PimInclude JoinDesired EvalJD
>> >>> >> >> >
>> >>> >> >> > node2# show ip pim upstream-rpf
>> >>> >> >> >
>> >>> >> >> > Source Group RpfIface RibNextHop RpfAddress
>> >>> >> >> >
>> >>> >> >> > node2# show ip route 192.168.4.60
>> >>> >> >> >
>> >>> >> >> > Address NextHop Interface Metric Preference
>> >>> >> >> >
>> >>> >> >> > 192.168.4.60 192.168.3.20 ra_sta0 1 0
>> >>> >> >> >
>> >>> >> >> > node2# q
>> >>> >> >> >
>> >>> >> >> > On Tue, Nov 3, 2009 at 7:51 AM, Everton Marques
>> >>> >> >> > <address@hidden>
>> >>> >> >> > wrote:
>> >>> >> >> >>
>> >>> >> >> >> Hi,
>> >>> >> >> >>
>> >>> >> >> >> Can you send the following commands from node2 ?
>> >>> >> >> >>
>> >>> >> >> >> show ip igmp interface
>> >>> >> >> >> show ip igmp group
>> >>> >> >> >> show ip igmp sources
>> >>> >> >> >> show ip pim designated-router
>> >>> >> >> >> show ip pim hello
>> >>> >> >> >> show ip pim interface
>> >>> >> >> >> show ip pim local-membership
>> >>> >> >> >> show ip pim join
>> >>> >> >> >> show ip pim neighbor
>> >>> >> >> >> show ip pim rpf
>> >>> >> >> >> show ip pim upstream
>> >>> >> >> >> show ip pim upstream-join-desired
>> >>> >> >> >> show ip pim upstream-rpf
>> >>> >> >> >> show ip route 192.168.4.60
>> >>> >> >> >>
>> >>> >> >> >> Thanks,
>> >>> >> >> >> Everton
>> >>> >> >> >>
>> >>> >> >> >> On Mon, Nov 2, 2009 at 5:44 AM, Yoda geek
>> >>> >> >> >> <address@hidden>
>> >>> >> >> >> wrote:
>> >>> >> >> >> > Hi Everton,
>> >>> >> >> >> >
>> >>> >> >> >> > I added the entry "ip pim ssm" on ra_ap0 as you suggested. I still don't
>> >>> >> >> >> > see the join request reaching the source. Below is what the configuration
>> >>> >> >> >> > looks like on the individual nodes:
>> >>> >> >> >> >
>> >>> >> >> >> > Node 1 pimd.conf
>> >>> >> >> >> > -------------------------
>> >>> >> >> >> > !
>> >>> >> >> >> > ! Zebra configuration saved from vty
>> >>> >> >> >> > ! 2009/08/08 05:03:23
>> >>> >> >> >> > !
>> >>> >> >> >> > hostname node1
>> >>> >> >> >> > password zebra
>> >>> >> >> >> > enable password zebra
>> >>> >> >> >> > log stdout
>> >>> >> >> >> > !
>> >>> >> >> >> > interface eth0
>> >>> >> >> >> > !
>> >>> >> >> >> > interface eth1
>> >>> >> >> >> > !
>> >>> >> >> >> > interface lo
>> >>> >> >> >> > !
>> >>> >> >> >> > interface ra_ap0
>> >>> >> >> >> > ip pim ssm
>> >>> >> >> >> > ip igmp query-interval 125
>> >>> >> >> >> > ip igmp query-max-response-time-dsec 100
>> >>> >> >> >> > !
>> >>> >> >> >> > interface ra_sta0
>> >>> >> >> >> > ip pim ssm
>> >>> >> >> >> > ip igmp query-interval 125
>> >>> >> >> >> > ip igmp query-max-response-time-dsec 100
>> >>> >> >> >> > !
>> >>> >> >> >> > !
>> >>> >> >> >> > ip multicast-routing
>> >>> >> >> >> > !
>> >>> >> >> >> > line vty
>> >>> >> >> >> > !
>> >>> >> >> >> >
>> >>> >> >> >> >
>> >>> >> >> >> > Node 2 pimd.conf
>> >>> >> >> >> > -------------------------
>> >>> >> >> >> > !
>> >>> >> >> >> > ! Zebra configuration saved from vty
>> >>> >> >> >> > ! 2009/08/09 22:38:12
>> >>> >> >> >> > !
>> >>> >> >> >> > hostname node2
>> >>> >> >> >> > password zebra
>> >>> >> >> >> > enable password zebra
>> >>> >> >> >> > log stdout
>> >>> >> >> >> > !
>> >>> >> >> >> > interface br-lan
>> >>> >> >> >> > !
>> >>> >> >> >> > interface eth0
>> >>> >> >> >> > !
>> >>> >> >> >> > interface eth1
>> >>> >> >> >> > !
>> >>> >> >> >> > interface lo
>> >>> >> >> >> > !
>> >>> >> >> >> > interface ra_ap0
>> >>> >> >> >> > ip pim ssm
>> >>> >> >> >> > ip igmp
>> >>> >> >> >> > ip igmp query-interval 125
>> >>> >> >> >> > ip igmp query-max-response-time-dsec 100
>> >>> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60
>> >>> >> >> >> > !
>> >>> >> >> >> > interface ra_sta0
>> >>> >> >> >> > ip pim ssm
>> >>> >> >> >> > ip igmp query-interval 125
>> >>> >> >> >> > ip igmp query-max-response-time-dsec 100
>> >>> >> >> >> > !
>> >>> >> >> >> > !
>> >>> >> >> >> > ip multicast-routing
>> >>> >> >> >> > !
>> >>> >> >> >> > line vty
>> >>> >> >> >> > !
>> >>> >> >> >> > On Sun, Nov 1, 2009 at 12:44 PM, Everton Marques
>> >>> >> >> >> > <address@hidden>
>> >>> >> >> >> > wrote:
>> >>> >> >> >> >>
>> >>> >> >> >> >> Hi,
>> >>> >> >> >> >>
>> >>> >> >> >> >> Yes, pimd should route the join request towards the source.
>> >>> >> >> >> >>
>> >>> >> >> >> >> However, you need to enable "ip pim ssm" on ra_ap0 as well. If you enable
>> >>> >> >> >> >> only "ip igmp" on an interface, pimd won't inject IGMP-learnt membership
>> >>> >> >> >> >> into the pim protocol.
>> >>> >> >> >> >>
>> >>> >> >> >> >> Cheers,
>> >>> >> >> >> >> Everton
>> >>> >> >> >> >>
>> >>> >> >> >> >> On Sun, Nov 1, 2009 at 7:02 AM, Yoda geek
>> >>> >> >> >> >> <address@hidden>
>> >>> >> >> >> >> wrote:
>> >>> >> >> >> >> > Hi Everton,
>> >>> >> >> >> >> >
>> >>> >> >> >> >> > Thanks for the suggestions. I made the changes to the config files on both
>> >>> >> >> >> >> > nodes as you suggested. Since it is not possible for me to force the client
>> >>> >> >> >> >> > to do a source-specific join, I added the following line under interface
>> >>> >> >> >> >> > ra_ap0 on node 2, where the client is attached:
>> >>> >> >> >> >> >
>> >>> >> >> >> >> > interface ra_ap0
>> >>> >> >> >> >> > ip igmp
>> >>> >> >> >> >> > ip igmp query-interval 125
>> >>> >> >> >> >> > ip igmp query-max-response-time-dsec 100
>> >>> >> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60
>> >>> >> >> >> >> >
>> >>> >> >> >> >> > I do see the source-specific IGMPv3 join for group 239.255.255.250 and
>> >>> >> >> >> >> > source 192.168.4.60, addressed to 224.0.0.22, on the node2 side. However,
>> >>> >> >> >> >> > this join request never makes it to node 1, where the source is located on
>> >>> >> >> >> >> > ra_ap0. Shouldn't pimd route this join request to the node where the source
>> >>> >> >> >> >> > is attached?
>> >>> >> >> >> >> >
>> >>> >> >> >> >> > Thanks,
>> >>> >> >> >> >> >
>> >>> >> >> >> >> >
>> >>> >> >> >> >> >
>> >>> >> >> >> >> >
>> >>> >> >> >> >> > On Mon, Oct 26, 2009 at 6:44 AM, Everton Marques
>> >>> >> >> >> >> > <address@hidden>
>> >>> >> >> >> >> > wrote:
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >> Hi,
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >> You did not mention whether you got a source-specific IGMPv3 join to the
>> >>> >> >> >> >> >> channel (S,G)=(192.168.4.60,239.255.255.250). Please notice qpimd is unable
>> >>> >> >> >> >> >> to program the multicast forwarding cache with non-source-specific groups.
>> >>> >> >> >> >> >> Usually the key issue is to instruct the receiver application to join the
>> >>> >> >> >> >> >> source-specific channel (S,G).
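
(A concrete illustration of that last point, assuming a receiver application
you can modify: on Linux the receiver joins the source-specific channel with
IP_ADD_SOURCE_MEMBERSHIP instead of the plain any-source IP_ADD_MEMBERSHIP, and
the kernel then emits the IGMPv3 source-specific report that pimd needs. The
group and source below are the ones from this thread; the UDP port and the
receiver's own address are made-up placeholders.)

    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        int fd = socket(AF_INET, SOCK_DGRAM, 0);
        if (fd < 0) { perror("socket"); return 1; }

        /* Bind to the UDP port the stream is sent to (1234 is only an example). */
        struct sockaddr_in local = { .sin_family = AF_INET,
                                     .sin_port = htons(1234),
                                     .sin_addr.s_addr = htonl(INADDR_ANY) };
        if (bind(fd, (struct sockaddr *)&local, sizeof(local)) < 0) {
            perror("bind"); return 1;
        }

        /* Source-specific join for (S,G) = (192.168.4.60, 239.255.255.250),
         * using the receiver's own address on node2's ra_ap0 subnet
         * (192.168.5.20 is a hypothetical client address). */
        struct ip_mreq_source mreq;
        memset(&mreq, 0, sizeof(mreq));
        inet_pton(AF_INET, "239.255.255.250", &mreq.imr_multiaddr);
        inet_pton(AF_INET, "192.168.4.60",    &mreq.imr_sourceaddr);
        inet_pton(AF_INET, "192.168.5.20",    &mreq.imr_interface);

        if (setsockopt(fd, IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP,
                       &mreq, sizeof(mreq)) < 0) {
            perror("IP_ADD_SOURCE_MEMBERSHIP"); return 1;
        }

        char buf[2048];
        ssize_t n = recv(fd, buf, sizeof(buf), 0);   /* first datagram of the stream */
        printf("received %zd bytes\n", n);
        close(fd);
        return 0;
    }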
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >> Regarding the config, the basic rule is:
>> >>> >> >> >> >> >> 1) Enable "ip pim ssm" everywhere (on every interface that should pass mcast).
>> >>> >> >> >> >> >> 2) Enable both "ip pim ssm" and "ip igmp" on interfaces attached to the
>> >>> >> >> >> >> >> receivers (IGMPv3 hosts).
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >> An even simpler config rule to remember is to enable both commands
>> >>> >> >> >> >> >> everywhere. They should not cause any harm.
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >> Hence, if your mcast receiver is attached to Node 2 at ra_ap0, I think you
>> >>> >> >> >> >> >> will need at least the following config:
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >> !
>> >>> >> >> >> >> >> ! Node 1
>> >>> >> >> >> >> >> !
>> >>> >> >> >> >> >> interface ra_ap0
>> >>> >> >> >> >> >>  ip pim ssm
>> >>> >> >> >> >> >> interface ra_sta0
>> >>> >> >> >> >> >>  ip pim ssm
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >> !
>> >>> >> >> >> >> >> ! Node 2
>> >>> >> >> >> >> >> !
>> >>> >> >> >> >> >> interface ra_ap0
>> >>> >> >> >> >> >>  ip pim ssm
>> >>> >> >> >> >> >>  ip igmp
>> >>> >> >> >> >> >> interface ra_sta0
>> >>> >> >> >> >> >>  ip pim ssm
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >> Hope this helps,
>> >>> >> >> >> >> >> Everton
>> >>> >> >> >> >> >>
>> >>> >> >> >> >> >> On Mon, Oct 26, 2009 at 4:42 AM, Yoda geek
>> >>> >> >> >> >> >> <address@hidden>
>> >>> >> >> >> >> >> wrote:
>> >>> >> >> >> >> >> > Hi Everton & Fellow  qpimd users,
>> >>> >> >> >> >> >> >
>> >>> >> >> >> >> >> > We're trying to stream multicast video traffic between a Tversity server and
>> >>> >> >> >> >> >> > a multicast client separated by 2 nodes (node1 and node2). Each node is
>> >>> >> >> >> >> >> > running the Quagga suite (version 0.99.15) along with qpimd (version 0.158)
>> >>> >> >> >> >> >> > on top of Linux 2.6.26.
>> >>> >> >> >> >> >> > Node 1 has 3 network interfaces - eth0, ra_ap0 and ra_sta0
>> >>> >> >> >> >> >> > Node 2 has 2 network interfaces - ra_sta0 and ra_ap0
>> >>> >> >> >> >> >> > The Tversity server talks to interface ra_ap0 on Node 1 and the multicast
>> >>> >> >> >> >> >> > client talks to interface ra_ap0 on Node 2.
>> >>> >> >> >> >> >> > Nodes 1 and 2 talk with each other over their ra_sta0 interfaces.
>> >>> >> >> >> >> >> >
>> >>> >> >> >> >> >> > Below is a graphical depiction:
>> >>> >> >> >> >> >> >
>> >>> >> >> >> >> >> > Tversity server ---ra_ap0--> Node 1 --ra_sta0---------ra_sta0--> Node 2 ---ra_ap0---> Video Client
>> >>> >> >> >> >> >> >
>> >>> >> >> >> >> >> >
>> >>> >> >> >> >> >> > Node 1 pimd.conf file
>> >>> >> >> >> >> >> > ==================
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > ! Zebra configuration saved from vty
>> >>> >> >> >> >> >> > ! 2009/08/01 20:26:06
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > hostname node1
>> >>> >> >> >> >> >> > password zebra
>> >>> >> >> >> >> >> > enable password zebra
>> >>> >> >> >> >> >> > log stdout
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > interface eth0
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > interface eth1
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > interface lo
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > interface ra_ap0
>> >>> >> >> >> >> >> > ip pim ssm
>> >>> >> >> >> >> >> > ip igmp
>> >>> >> >> >> >> >> > ip igmp query-interval 125
>> >>> >> >> >> >> >> > ip igmp query-max-response-time-dsec 100
>> >>> >> >> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > interface ra_sta0
>> >>> >> >> >> >> >> > ip igmp
>> >>> >> >> >> >> >> > ip igmp query-interval 125
>> >>> >> >> >> >> >> > ip igmp query-max-response-time-dsec 100
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > ip multicast-routing
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > line vty
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> >
>> >>> >> >> >> >> >> > Node 2 pimd.conf configuration file
>> >>> >> >> >> >> >> > ============================
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > ! Zebra configuration saved from vty
>> >>> >> >> >> >> >> > ! 2009/08/02 21:54:14
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > hostname node2
>> >>> >> >> >> >> >> > password zebra
>> >>> >> >> >> >> >> > enable password zebra
>> >>> >> >> >> >> >> > log stdout
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > interface eth0
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > interface eth1
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > interface lo
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > interface ra_ap0
>> >>> >> >> >> >> >> > ip igmp
>> >>> >> >> >> >> >> > ip igmp query-interval 125
>> >>> >> >> >> >> >> > ip igmp query-max-response-time-dsec 100
>> >>> >> >> >> >> >> > ip igmp join 239.255.255.250 192.168.4.60
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > interface ra_sta0
>> >>> >> >> >> >> >> > ip igmp
>> >>> >> >> >> >> >> > ip igmp query-interval 125
>> >>> >> >> >> >> >> > ip igmp query-max-response-time-dsec 100
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > ip multicast-routing
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> > line vty
>> >>> >> >> >> >> >> > !
>> >>> >> >> >> >> >> >
>> >>> >> >> >> >> >> > From the above configuration you can see that interface ra_ap0 on node 1 is
>> >>> >> >> >> >> >> > configured to be the multicast source (ip pim ssm).
>> >>> >> >> >> >> >> > We do see some multicast join requests in wireshark from both the server and
>> >>> >> >> >> >> >> > the client, however no data flows. Initially we started qpimd without the
>> >>> >> >> >> >> >> > entry "igmp join ..." on either the client-side node or the server-side node.
>> >>> >> >> >> >> >> > Looking at the node 1 configuration through "show ip igmp groups" we didn't
>> >>> >> >> >> >> >> > see the group membership for "239.255.255.250", while this group membership
>> >>> >> >> >> >> >> > was observed on node 2. I put this group membership on both nodes to force
>> >>> >> >> >> >> >> > them to join this multicast group - however without success.
>> >>> >> >> >> >> >> >
>> >>> >> >> >> >> >> > Just to give you some background - when both the client and the server are
>> >>> >> >> >> >> >> > talking to the same node, say node 2, on the same interface ra_ap0 (without
>> >>> >> >> >> >> >> > qpimd running), multicast video gets served flawlessly from the Tversity
>> >>> >> >> >> >> >> > server to the client through the node. But with the 2-node setup we aren't
>> >>> >> >> >> >> >> > able to see the video streams go through to the client.
>> >>> >> >> >> >> >> >
>> >>> >> >> >> >> >> > Could you please review the above configuration for errors, or offer any
>> >>> >> >> >> >> >> > suggestions to resolve this issue? Any help would be greatly appreciated.
>> >>> >> >> >> >> >> >
>> >>> >> >> >> >> >> > Thanks,
>> >>> >> >> >> >> >> >
>> >>> >> >> >> >> >> >
>> >>> >> >> >> >> >
>> >>> >> >> >> >> >
>> >>> >> >> >> >
>> >>> >> >> >> >
>> >>> >> >> >
>> >>> >> >> >
>> >>> >> >
>> >>> >> >
>> >>> >
>> >>> >
>> >>
>> >>
>> >
>
>



