Friday, November 6, 2015

Skype 4.3.0.37 segmentation fault

Hi,

Today a small note about Skype 4.3.0.37 under Linux. There are a lot of reports about a segmentation fault while running Skype. Some solutions have been proposed, but unfortunately none of them worked for me.

After some research, I had to edit one file:


Initial version:


$ cat /usr/bin/skype
#!/bin/sh

export LD_PRELOAD=/usr/lib/libGL.so.1
exec skype-bin


After editing:


#!/bin/sh

export LD_PRELOAD=/usr/lib/libGL.so.1.2.0
exec skype-bin

Just in case:

$ file /usr/lib/libGL.so.1.2.0

/usr/lib/libGL.so.1.2.0: ELF 32-bit LSB shared object, Intel 80386, version 1 (SYSV), dynamically linked, BuildID[sha1]=0x9037df64c69e76b518f009a00d21271f5e1ec799, stripped



This solution worked for me.
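The edit above can also be scripted; a minimal sketch, assuming the wrapper lives at /usr/bin/skype and the fully versioned library name on your system is libGL.so.1.2.0 (check with ls /usr/lib/libGL.so* first):

```shell
# Replace the bare soname with the fully versioned file name in the
# Skype wrapper script (keep a backup of the original, just in case).
cp /usr/bin/skype /usr/bin/skype.bak
sed -i 's|/usr/lib/libGL\.so\.1$|/usr/lib/libGL.so.1.2.0|' /usr/bin/skype
```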

Thursday, August 27, 2015

Part III: Troubleshooting multicast absence traffic

Sometimes our software does not work as we expect. It has to work, but it does not. Replaying multicast traffic via tcpreplay is no exception.

A common symptom is that you follow all the steps from part I and part II but still cannot see the traffic on the UDP port. What makes it even weirder is that you can capture the data with tcpdump -i interface host ipaddress or tcpdump -i interface port portNumber. The most frustrating situation is when you cannot even google the answer.

Here are some tips which could save you time:

1. Check that your interface is set up to listen for multicast traffic with the ip maddr command. You should see the IP address of the multicast group you are waiting for data from.

2. Check that the destination MAC address is the same as the MAC of your network interface (or at least FF:FF:FF:FF:FF:FF, i.e. broadcast). In my case I had a pcap with an all-zero destination MAC address ( 00:00:00:00:00:00 ): I managed to read the traffic with tcpdump, but could not see it as multicast on the UDP port ( that did not work even with promiscuous mode on ).

3. Pay attention to the pktlen parameter of the tcpreplay tool ( it did not help in my case, but why not try ).

4. Run tcprewrite --fixcsum -F pad --infile=input.pcap --outfile=out.pcap ( after that, out.pcap could be successfully replayed and captured ).
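Point 2 can be checked without guessing: for IPv4 multicast, the destination MAC is derived from the group address (01:00:5e plus the low 23 bits of the IP, per RFC 1112). A small sketch that computes the expected MAC, using the 239.10.5.2 group from part II as an example:

```shell
# Map an IPv4 multicast group address to its Ethernet destination MAC
# (01:00:5e + low 23 bits of the IP address).
group=239.10.5.2
o2=$(echo "$group" | cut -d. -f2)
o3=$(echo "$group" | cut -d. -f3)
o4=$(echo "$group" | cut -d. -f4)
printf '01:00:5e:%02x:%02x:%02x\n' "$((o2 & 127))" "$o3" "$o4"
```

If tcpdump -e shows a different destination MAC in your pcap (all zeros in my case), that is a strong hint why the UDP socket never sees the data.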


Hope this helps.

Sunday, August 16, 2015

Part II: Send packets on the same computer running tcpreplay

Q: Can I send packets on the same computer running tcpreplay?
Generally speaking no. When tcpreplay sends packets, it injects them between the TCP/IP stack of the system and the device driver of the network card. The result is the TCP/IP stack system running tcpreplay never sees the packets.
One suggestion that has been made is using something like VMWare, Parallels or Xen. Running tcpreplay in the virtual machine (guest) would allow packets to be seen by the host operating system.
That's what the official documentation says ... In my opinion, the answer "generally speaking yes" is more optimistic :).

I would like to thank Denis Pynkin for his help: he found a practical solution that resolves the issue and lets you use only one machine to send and receive traffic. Denis, thank you!

Prepare environment


In this part we are going to deal with a feature called namespaces:

A namespace wraps a global system resource in an abstraction that makes it appear to the processes within the namespace that they have their own isolated instance of the global resource. Changes to the global resource are visible to other processes that are members of the namespace, but are invisible to other processes. One use of namespaces is to implement containers.

You can learn more about namespaces from the man page or from the series of articles at LWN: Namespaces in operation, part 1: namespaces overview.

We are going to work with the unshare utility. Our goal is to create a process with an isolated network namespace, so that we have two "honest" network interfaces: one to send from and one to receive on.

We start with a screenshot of my environment with some explanation, and then we'll see the steps to create your own. First of all, there is a tmux session. The left pane is our host environment; the right one is the "container" environment. Pay attention to the first command in both panes, echo $$: it shows two different PIDs ( process IDs ). In the right pane I executed the following command: unshare --net bash. (After that we have a different bash process with the network namespace unshared.) The network interfaces ceth1 and ceth0 are manually created with IP addresses assigned, and we can send and receive ping requests.

If you carefully follow the commands below, you should end up with a similar environment.
Here H == host, C == container:
H: ip link add name ceth0 type veth peer name ceth1
H: ip a add 172.18.0.1/24 dev ceth0
H: ip link set dev ceth0 up
H: unshare --net bash
C: echo $$    ( note this PID )
H: ip link set ceth1 netns <PID>
C: ip a add 172.18.0.2/24 dev ceth1
C: ip link set dev ceth1 up
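For reference, the same environment can also be built non-interactively with the ip netns wrapper instead of a bare unshare. A sketch, run as root; the namespace name "replay" is my own choice, the interface names and addresses match the post:

```shell
#!/bin/sh
set -e
ip netns add replay                        # named network namespace
ip link add name ceth0 type veth peer name ceth1
ip link set ceth1 netns replay             # move one veth end inside
ip a add 172.18.0.1/24 dev ceth0
ip link set dev ceth0 up
ip netns exec replay ip a add 172.18.0.2/24 dev ceth1
ip netns exec replay ip link set dev ceth1 up
# sanity check from the host: ping -c 1 172.18.0.2
```

Commands are then run inside the namespace with ip netns exec replay <command>, which plays the role of the right tmux pane.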

At this step we have the environment configured and are ready to move on ...

Prepare traffic with tcprewrite


download template.pcap

Before we can replay multicast UDP traffic, we have to do some preparation of our pcap file.
That can be done with one command:

tcprewrite --enet-dmac=4e:ae:72:a4:2a:96 --srcipmap=127.0.0.1:172.18.0.2 --fixcsum --infile=template.pcap --outfile=dump.pcap

Let’s examine parameters:

--enet-dmac=4e:ae:72:a4:2a:96
Tells tcprewrite to replace the L2 destination MAC address in the input file with the one specified on the command line. In our case we want the destination MAC address to be equal to the host's MAC address.

--srcipmap=127.0.0.1:172.18.0.2
Tells tcprewrite to rewrite the L3 source IP address from 127.0.0.1 to 172.18.0.2. Since our container's IP address is 172.18.0.2, we put it into the IP packets.

--fixcsum
The documentation says that the IP checksum is recalculated automatically, but when I ran the commands separately, step by step, somehow the checksum was not recalculated; that's why I prefer to pass this flag explicitly now.

--infile=template.pcap
Name of the input pcap file.

--outfile=dump.pcap
Name of the output pcap file.

Now we have changed the destination MAC address and the source IP address; moreover, we passed the --fixcsum parameter to recalculate the IP checksum field.

Output file is ready to be replayed.
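Before replaying, it is worth verifying that the rewrite actually happened. A quick sketch using tcpdump's -e flag, which prints the link-level header when reading a file (file name as above):

```shell
# Print the first packet of the rewritten dump with its Ethernet header:
# the destination MAC and source IP should now match the values we set.
tcpdump -nn -e -r dump.pcap -c 1
```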

Let's check


Now we are going to check that the traffic we send from ceth1 can be captured on ceth0.
I will use the socat tool for that.
In left pane of our tmux application run the following command:

socat UDP4-RECVFROM:11000,ip-add-membership=239.10.5.2:172.18.0.1 STDOUT

In the right pane (child container) run:
tcpreplay -i ceth1 dump.pcap

If everything goes smoothly, you should see some printed bytes in the left pane and then socat will exit.

It means that socat was able to receive the traffic and everything works fine!
Now you can run your application in the left pane instead of socat and run tcpreplay in the right pane.

Enjoy!

Sunday, August 9, 2015

Part I: Capturing multicast traffic over PPTP VPN

To start with: when you have a VPN connection set up in Linux, you usually have 2 (or more) network interfaces in the system. Let's call them eno1 and ppp0 (at least that's what they are called in my CentOS 7), where eno1 is the physical interface while ppp0 is a virtual one.

Capturing traffic is not a difficult task in itself; we can do it using the tcpdump tool.

For example: tcpdump -i eno1 -w fileToSave.pcap
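If you already know the multicast group you are interested in, a capture filter keeps the file small; a sketch, where 239.10.5.2 is just an example group address:

```shell
# Capture only UDP traffic to or from one multicast group.
tcpdump -i eno1 -w fileToSave.pcap 'udp and host 239.10.5.2'
```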

One question is which interface to capture on: eno1 or ppp0?
Below are a few examples of capturing multicast traffic on different network interfaces, and the results:


Example of traffic on eno1 interface.



Example of traffic on ppp0 interface.

Pay attention to how the packets are nested when capturing on the eno1 interface.

Issue

At this point we have the fileToSave.pcap file with the dump, but there is one small issue which could spoil everything :)

If you are using the wireshark tool, you can easily look at the file's encapsulation type in the Statistics -> Summary menu ( it can also be done using the capinfos command-line utility which ships with wireshark ).


Encapsulation type of traffic in case of eno1 interface.

Encapsulation type of traffic in case of ppp0 interface

The main issue here is that before we can replay our traffic with tcpreplay, we have to do some preparation using the tcprewrite utility, but tcprewrite does not work with pcap files whose encapsulation type is "Linux cooked-mode". If you try, you will get the following error message:
"DLT_LINUX_SLL pcap's must contain only ethernet packets"

You can read more about Linux cooked-mode here: Linux cooked-mode capture (SLL)

Solution

I am proud to say that I work with colleagues who found an elegant solution to this issue.
So all credit for this solution goes to Denis Pynkin.

In my case it was decided to use the traffic captured from the ppp0 interface (with the synthetic SLL header). But to use the traffic later, we have to convert the SLL header to an Ethernet II header. How? You may have missed the fact that an Ethernet II header is only 14 bytes while an SLL header is 16 bytes long (go back to the pictures above and look more carefully). Don't worry, I missed that as well. The solution is simple: just cut the 2 extra bytes from the SLL header. That's all, and it can be done using the editcap tool.

editcap -C 2 -F pcap -T ether input-file-name.pcap output-file-name.pcap

Here is the short description from man page on the flags:

-C <choplen>:    Sets the chop length to use when writing the packet data. Each packet is chopped by <choplen> bytes of data.

-F <file format>:    Sets the file format of the output capture file.

-T <encapsulation type>:    Sets the packet encapsulation type of the output capture file.

As a result, we will have output-file-name.pcap with the Ethernet encapsulation type.
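After the conversion it is worth double-checking the result; capinfos (mentioned above) can print just the encapsulation type, a sketch:

```shell
# The output should report "Ethernet" instead of "Linux cooked-mode".
capinfos -E output-file-name.pcap
```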
In the next articles we will see how the traffic should be modified in order to be replayed by tcpreplay.


Greetings and plans ...

Hi there!

I would like to start this blog with a short series of articles about capturing, preparing and replaying network traffic using different network tools under Linux. In these posts I will share my experience with network tools, issues and their solutions.

For now I have the following plan in mind, but it could change during material preparation:



UPDATE: 16/08/2015: Link to part II is added.
UPDATE: 27/08/2015: Link to part III is added.

~ Evgeny Rybak