idata
Community Manager
3,120 Views

PREEMPT RT on Intel Edison

Hello!

 

I ran into trouble trying to apply the PREEMPT RT patch to the Intel Edison kernel.

 

Steps I've done:

1) Get the sources from http://downloadmirror.intel.com/25028/eng/edison-src-ww25.5-15.tgz

 

(I am using an older release because it seems that some things changed and the BSP guide has not been updated)

2) make setup

3) cd edison-src/out/linux64

 

source poky/oe-init-build-env

bitbake edison-image

So far so good. PostBuild.sh and flashing to the Edison work with no errors. But the problems appear when I try to apply the patch:

cd edison-src/meta-intel-edison/meta-intel-edison-bsp/recipes-kernel/linux/files

wget https://www.kernel.org/pub/linux/kernel/projects/rt/3.10/older/patch-3.10.17-rt12.patch.bz2

bzip2 -d patch-3.10.17-rt12.patch.bz2

vim ../linux-yocto_3.10.bbappend

Add the line: SRC_URI += "file://patch-3.10.17-rt12.patch"
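For reference, a minimal sketch of what the bbappend could contain after this step. The FILESEXTRAPATHS line is the usual Yocto way of making the files/ directory visible to the recipe and may already be present in the Edison BSP's bbappend, so treat this as an illustration rather than the exact file contents:

FILESEXTRAPATHS_prepend := "${THISDIR}/files:"
SRC_URI += "file://patch-3.10.17-rt12.patch"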

4) cd edison-src/out/linux64/build

bitbake virtual/kernel -c menuconfig

Then the error occurs (please see the attached document, Error Log 1).

There are other instructions on the web describing how to apply the PREEMPT RT patch to the Linux kernel, but I also get an error when following those: bitbaking menuconfig works, but when I try to bitbake edison-image I get an error (Error Log 2).

I am building on Ubuntu 14.04 64 bit. Does anyone know what to do in this case?

Thank you very much for any help!

19 Replies
FerryT
Valued Contributor I
81 Views

I have been able to apply a patch and build. I'll try to look up how we did that and report back here. We haven't tested the result yet, but I expect to have some results (good or bad) in the next few weeks.

idata
Community Manager
81 Views

Hello FerryT,

It would be really great if you could post the steps and which OS you built on.

Thank you very much!

idata
Community Manager
81 Views

Has anyone experienced these errors? Or has anyone been able to build the image with the PREEMPT RT patch?

idata
Community Manager
81 Views

Hi tannerl,

 

 

We've been researching the PREEMPT RT patch and the Edison, but this patch is not validated or supported on the Edison. We've already made the recommendation to consider PREEMPT_RT support in future releases.

 

 

@FerryT: What were your results when trying to apply the patch?

 

 

-Sergio

 

FerryT
Valued Contributor I
81 Views

The patch builds.

This week we are developing a simple test that will allow two Edisons to communicate at 2 Mb/s and will allow us to grab some scope screenshots that show timing on the standard (Edison 3.10.17, now a bit outdated) kernel.

By the end of next week we hope to flash the PREEMPT RT kernel (based on the same 3.10.17) using the same test software and grab new screenshots to compare.

We intend to publish the communication test and the Yocto layer or recipe, but we haven't decided how yet (maybe GitHub). I'll revisit here to report.

idata
Community Manager
81 Views

Hi FerryT,

Thanks for the update!

It would be really great if you could share this with us. Thank you!

FerryT
Valued Contributor I
81 Views

Ok, so we finally got code that uses the HSU (UART) running at 2 Mb/s. It writes a 1024-byte packet every 15 ms and reads independently.

Because of this you can tie the RX to the TX of a single Edison (read and write occur simultaneously) or cross-connect two Edisons.
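For readers who want to reproduce the serial setup, here is a minimal sketch of opening a UART in raw mode at 2 Mb/s. It relies on the standard Linux B2000000 constant; the device node /dev/ttyMFD1 for the Edison HSU is an assumption, not taken from this thread.

#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Open the given serial device and configure it for 2 Mb/s raw I/O. */
int open_hsu_2mbps(const char *dev)
{
    int fd = open(dev, O_RDWR | O_NOCTTY);
    if (fd < 0)
        return -1;

    struct termios tio;
    if (tcgetattr(fd, &tio) < 0) {
        close(fd);
        return -1;
    }
    cfmakeraw(&tio);                  /* raw mode: no echo, no line editing */
    cfsetispeed(&tio, B2000000);      /* 2 Mb/s in both directions */
    cfsetospeed(&tio, B2000000);
    tio.c_cc[VMIN]  = 0;              /* return immediately from read() */
    tio.c_cc[VTIME] = 0;
    if (tcsetattr(fd, TCSANOW, &tio) < 0) {
        close(fd);
        return -1;
    }
    return fd;                        /* e.g. open_hsu_2mbps("/dev/ttyMFD1") */
}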

With no preempt-rt we see the packets being sent nicely with a minimum interval of 15 ms and some jitter of 2 ms, but sometimes it is much longer. I have been using iperf as a stress tester, as it exercises the USB (to our Ethernet port) or the WiFi, and have seen the delay go up to as much as 40 ms! I believe this is a realistic stress test for our application. We do not seem to be losing any received packets, although I am a bit unhappy with the large delay that occurs after all the bytes have been received (caused by the DMA transfer timeout?)

I'll post some screenshots of the oscilloscope later.

Also, today we not only built the preempt-rt kernel but also flashed the image and got it to run. More results will follow after validating our measurements.

FerryT
Valued Contributor I
81 Views

So, what does our code do?

/* Ask the driver how many bytes are waiting in the receive buffer. */
#include <sys/ioctl.h>

int getNumberOfAvailableBytes(int fd) {
    int nbytes = 0;
    if (ioctl(fd, FIONREAD, &nbytes) < 0)
        return -1;
    return nbytes;
}

We use this to get the number of bytes in the buffer and then read only the bytes that are present. However this does not seem to be working well, as we always get either 0 or the complete received message. I suspect this is again due to the DMA waiting for a certain timeout before signalling the bytes in the buffer. Meh. I hate unnecessary waits.

  • we take a full buffer of 1024 chars and base64-encode it, to make sure the characters NULL, STX, ETX and \FF can never appear in a message. This multiplies the number of bytes by about 1.33x.
  • before encoding we calculate a CRC32C on the message using the fastest algorithm available (Intel's assembly one from the kernel, but ported to C) and put that at the end, overwriting the last 4 NULL bytes.
  • we put an STX at the beginning and an ETX at the end
  • we send this every 15 ms and toggle a pin (using mmap, but that might be a mistake for preempt_rt: as I read today at https://rt.wiki.kernel.org/index.php/Frequently_Asked_Questions , it can cause page faults, which is not good for an RT application. Or does that not apply here?). The pin is toggled just before and just after the write (see the sketch after this list).
  • we receive continuously whatever is in the receive buffer, strip whatever is between STX and ETX, decode it and, if the message length is 1024, verify the CRC32C. The pin is toggled just before the decode and just after the CRC checks out, so a failed CRC will flip the polarity of this signal.
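To make the transmit side concrete, here is a minimal sketch of the 15 ms send loop, not the authors' actual code: the base64/CRC32C payload is replaced by placeholder bytes, the GPIO toggling via mmap is only marked by comments, and the 0xFF preamble value and the /dev/ttyMFD1 device name are assumptions.

#include <fcntl.h>
#include <stdint.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define FRAME_LEN 1372   /* 2 preamble + STX + 1368 base64 chars + ETX */

int main(void)
{
    uint8_t frame[FRAME_LEN];
    memset(frame, 'A', sizeof frame);     /* placeholder for base64(payload + CRC32C) */
    frame[0] = frame[1] = 0xFF;           /* preamble (value assumed) */
    frame[2] = 0x02;                      /* STX */
    frame[FRAME_LEN - 1] = 0x03;          /* ETX */

    int fd = open("/dev/ttyMFD1", O_WRONLY | O_NOCTTY);   /* device name assumed */
    if (fd < 0)
        return 1;

    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (;;) {
        /* toggle GPIO just before the write (omitted here) */
        if (write(fd, frame, sizeof frame) < 0)
            break;
        /* toggle GPIO just after the write (omitted here) */

        next.tv_nsec += 15 * 1000 * 1000;                  /* 15 ms period */
        if (next.tv_nsec >= 1000000000L) {
            next.tv_nsec -= 1000000000L;
            next.tv_sec += 1;
        }
        /* sleep to an absolute deadline so jitter does not accumulate */
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    close(fd);
    return 0;
}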
FerryT
Valued Contributor I
81 Views

So what does that look like?

Here is the Edison talking to itself, without stressing.

https://drive.google.com/open?id=0B272plWyW_YWNThhTkllNnFkN3M

CH3 shows the transmit toggle, CH4 the decode/crc check toggle, CH2 the data (sent/received simultaneously as the TX is looped back).

Our scope nicely shows a 43 - 98 µs delay from the write command to the first byte actually being transmitted.

Then we see a whopping 2 ms delay from the last byte received to the decode phase.

Time between transmits is 15.0 to 15.6 ms, with a stdev of 40 µs (those in the know will recognize that 0.6 ms is far beyond 6 sigma, so these are outliers, as can be expected in a non-RT OS).

Our message consists of 1372 bytes (1024 base64 encoded, + 2 preambles + STX + ETX), which is 15092 bits (at 11 bits/char) = 7.546 ms at 2 Mb/s.

Time to the decode is 8.1 - 9.95ms.

The DMA timeout takes 0.55 - 2.4 ms (grrr); however, looking at the mean, it is normally 2 ms +/- 0.15 ms.

FerryT
Valued Contributor I
81 Views

Things are less rosy when stressing using iperf (this was over our eth modem which achieves 80Mb/s).

Sending bytes sometimes gets delayed by up to 40 ms (from the statistics), while the screenshot shows (probably) a task switch happening in the middle of the decode/CRC phase. The first decode toggle is so far delayed that it happens during the transmit of the next message.

FerryT
Valued Contributor I
81 Views

Just for fun, communication between two Edisons (unstressed).

CH1 is now the data received from the other Edison (when it is decoded, CH4 shows the pulse).

FerryT
Valued Contributor I
81 Views

I do have data from the preempt-rt kernel; however, I still need to set the priority of the user space program correctly, lock memory, etc., which we didn't do.

But, here is a screen shot from our kernel:

I printed all messages from journalctl on the preempt kernel, as I was looking for kernel errors (there were some). I need to look into that; so far it seems harmless, with the kernel booting, networking, wireless, pin toggling and serial all working.

FerryT
Valued Contributor I
81 Views

How did we build? We created our own layer so we don't have to meddle with the one from http://git.yoctoproject.org/cgit/cgit.cgi/meta-intel-edison/. In the layer we add a recipe that patches the kernel for rt_preempt, increases the DMA buffer, adds our eth driver and a driver for an additional USB/serial. It also copies the pre-configured kernel conf file to the build directory (yeah, that could be nicer by patching the kernel conf file, but I figured that since the meta-intel-edison layer just copies the conf file, so can I).
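For those who have never made their own layer: the skeleton is just a conf/layer.conf plus your recipe directories, registered in bblayers.conf. A minimal sketch of such a layer.conf, with placeholder names rather than FerryT's actual layer:

BBPATH .= ":${LAYERDIR}"
BBFILES += "${LAYERDIR}/recipes-*/*/*.bb ${LAYERDIR}/recipes-*/*/*.bbappend"
BBFILE_COLLECTIONS += "my-edison-rt"
BBFILE_PATTERN_my-edison-rt = "^${LAYERDIR}/"
BBFILE_PRIORITY_my-edison-rt = "6"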

The recipe is shared here: https://drive.google.com/open?id=0B272plWyW_YWb1l4U25kb0RnamM

FerryT
Valued Contributor I
81 Views

I hope to modify the priority of the program, redo the preempt_rt measurements and upload the code to GitHub tomorrow or so.

A note about the code: it was written by our intern, who had zero experience in programming and needed to learn C, make, git, bitbake and oscilloscopes in 10 days. He did get a bit of help from us of course :-) (he's my son).

idata
Community Manager
81 Views

Wow - Thanks for your detailed answer!

It seems to be good work (sounds like an interesting internship for your son). I will have a look at your recipe and get back to you.

Thanks a lot!

idata
Community Manager
81 Views

Thank you for the detailed answer indeed; this will definitely help other users in the community. We encourage you to remain involved in the community.

-Sergio

 

FerryT
Valued Contributor I
81 Views

Of course, building preempt_rt was based for a large part on the work described here: https://communities.intel.com/thread/58653

FerryT
Valued Contributor I
81 Views

OK, so I learned something: to get the user space program to behave RT, you need to do a few things: set the priority, lock memory, etc. It is all nicely described at https://rt.wiki.kernel.org/index.php/RT_PREEMPT_HOWTO .
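For completeness, a minimal sketch of those two steps as described in the HOWTO linked above (the priority value 80 is just an example, not taken from this thread):

#include <sched.h>
#include <string.h>
#include <sys/mman.h>

/* Give the calling process a real-time scheduling class and lock its memory
   so page faults cannot stall it at run time. Returns 0 on success. */
int make_realtime(void)
{
    struct sched_param sp;
    memset(&sp, 0, sizeof sp);
    sp.sched_priority = 80;                       /* SCHED_FIFO priorities run 1..99 */
    if (sched_setscheduler(0, SCHED_FIFO, &sp) != 0)
        return -1;
    if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)  /* lock current and future pages */
        return -1;
    return 0;
}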

When we do that, I get the following scope picture (edison talking to itself, not stressed):

And again stressing with iperf:

So, looking at the time from toggling CH3 to the start of data: without stress it is at most 200 µs, and with stress 1 ms, while from the start of sending until decoded it is 9.1 vs. 9.8 ms.

I would conclude: preempt_rt really works and is useful; especially for an IoT device it should be standard.

It does come at a cost: wifi speed is about halved from 25 Mbits/sec to 13 Mbits/sec.

Another note: sometimes the message is not decoded. Analyzing the data, I found about 6 bytes to be missing in those cases, which suggests overrun errors. These are not shown in the stream as \00 characters, probably due to using DMA to transfer from the serial port.

FerryT
Valued Contributor I
81 Views

I just pushed our code to https://github.com/htot/hs_uart .

If you want to cross compile don't forget you need to:

# source ../environment-setup-core2-32-poky-linux

before running make (this assumes you have a link to that script in ../)
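A typical cross-build session would then look something like this; the binary name hs_uart, the target path and the Edison's address are placeholders, not taken from the repository:

# source ../environment-setup-core2-32-poky-linux
# make
# scp hs_uart root@<edison-ip>:/home/root/

The SDK environment script exports CC, CFLAGS and friends for the Edison target, so a plain make picks up the cross toolchain.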
