Nios® V/II Embedded Design Suite (EDS)

TASK_STACKSIZE setting in Simple Socket Server

Altera_Forum
Honored Contributor II

I just got the Nios II Simple Socket Server example app working on a custom board with a Cyclone III and 512 KB of SRAM (no other RAM). The original ELF file was almost 900 KB, with a significant portion of that in .bss, so it wouldn't fit in the available memory. 

 

To get the memory footprint of the app down to a manageable size, I changed TASK_STACKSIZE in niosii_simple_socket_server.h from 32768 to 2048.  
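
For reference, the entire change is the one define below (the macro and header names are taken from the generated project; the exact line may differ between EDS versions). If the task stacks are statically allocated arrays of this size, as in the stock example, that is also why the value shows up directly in .bss:

    /* niosii_simple_socket_server.h */
    /* was: #define TASK_STACKSIZE 32768 */
    #define TASK_STACKSIZE 2048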

 

This gave me an ELF file size of 339 KB, and after considerable fussing around with other details, I got the SSS to run correctly on my board. Woo hoo!!! 

 

Then I added the El Camino SD/MMC SPI core component in Qsys and tried to run the SSS with this configuration to see if anything had broken in the process. The ELF file was still 339 KB, but the app crashed after the DHCP task completed, with this output on the Nios II console: 

 

    Nios II Simple Socket Server starting up.
    Created "monitor_phy" task (Prio: 9)
    panic: stack alloc
    dtrap - needs breakpoint
    ip_exit: calling func 0x101c794
    netclose: closing iface Altera TSE MAC ethernet
    ip_exit: calling func 0x102428c

 

I changed TASK_STACKSIZE from 2048 to 1024, and now the SSS runs without error. 

All well and good for now.  

 

 

My question is: 

 

Why was TASK_STACKSIZE set so high in the first place, and am I setting myself up for more problems down the road by reducing it to 1024? 

 

 

Thanks in advance for your comments and suggestions.
Altera_Forum
Honored Contributor II

The required stack size mainly depends on the level of function nesting and the size of local variables. You can reduce stack usage if you inline functions and declare any big local variable (e.g. a struct or array) as static. 
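
To illustrate the static-variable point, here is a minimal sketch (the function and buffer names are hypothetical): a large local buffer is carved out of the calling task's stack for the duration of the call, while a static one is placed in .bss instead.

    /* Hypothetical example: a 1 KB scratch buffer. */
    void parse_request(void)
    {
        unsigned char buf[1024];        /* 1 KB taken from the task's stack */
        /* ... fill and use buf ... */
    }

    void parse_request_static(void)
    {
        static unsigned char buf[1024]; /* placed in .bss, costs no stack */
        /* ... fill and use buf ... */
    }

The trade-off is that the static version is no longer reentrant, so it is only safe if a single task ever calls it.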

Determining the correct stack size is not straightforward, because you must account for the worst-case condition, which may not be easy to reach in testing. 

For SSS-like projects I've never used stack sizes below 4096. I fear 1024 is a rather small value, unless your Simple Socket Server is really simple. 

A method for monitoring stack utilization that I have sometimes used is to initialize the stack with a known data pattern, run the application for a while, and then peek into the stack area to see the maximum usage.
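
A minimal sketch of that technique, assuming a descending stack (as on Nios II) and a word-sized fill pattern; all names here are made up. If you are on uC/OS-II, the kernel can also do this bookkeeping for you when a task is created with the OS_TASK_OPT_STK_CHK option and you call OSTaskStkChk().

    #define STACK_WORDS   2048
    #define STACK_PATTERN 0xA5A5A5A5u

    static unsigned long task_stack[STACK_WORDS];

    /* Fill the stack area with the pattern before the task is created. */
    void stack_fill(void)
    {
        int i;
        for (i = 0; i < STACK_WORDS; i++)
            task_stack[i] = STACK_PATTERN;
    }

    /* Scan upward from the low end: words still holding the pattern
       were never touched, so the count is the remaining head-room. */
    int stack_words_free(void)
    {
        int i = 0;
        while (i < STACK_WORDS && task_stack[i] == STACK_PATTERN)
            i++;
        return i; /* multiply by sizeof(unsigned long) for bytes */
    }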
Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

A method for monitoring stack utilization that I have sometimes used is to initialize the stack with a known data pattern, run the application for a while, and then peek into the stack area to see the maximum usage. 

--- Quote End ---  

 

 

What a clever idea. I will try this as my design progresses to see how the stack usage changes. Right now the low stack size seems to be okay, but maybe it will eventually need to increase. 

 

Thanks so much, I appreciate your comments.
Altera_Forum
Honored Contributor II

One problem is that the maximum stack use is likely to be in an obscure error path... 

If your code has no recursion, no calls to alloca(), and no (or only identifiable) function pointers, it is possible to determine the maximum stack use by analysing the object code. Parsing the output of gcc -S -fverbose-asm is possibly the easiest source of that data.
Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

One problem is that the maximum stack use is likely to be in an obscure error path... 

If your code has no recursion, no calls to alloca(), and no (or only identifiable) function pointers, it is possible to determine the maximum stack use by analysing the object code. Parsing the output of gcc -S -fverbose-asm is possibly the easiest source of that data. 

--- Quote End ---  

 

 

Thanks dsl. The "obscure error path" is certainly what I am trying to avoid, while at the same time getting the application to fit in available memory. I expect my final application will differ significantly from the SSS example, so I will have to monitor the stack usage to make sure I keep it large enough. 

 

This brings to mind another question or two about stack size assignment: 

 

Can stack size be assigned on a per-task basis, with each task getting its own stack size, rather than all of them using the global TASK_STACKSIZE define? 

 

 

If this is possible, would the object-code analysis method you described above work for determining how much stack memory individual tasks are using?
Altera_Forum
Honored Contributor II

Your first problem is probably sorting out the stack used by calls into the OS. 

One thing that can help stack problems is to use a dedicated interrupt stack. 

You then don't need to allow space for interrupts on every task's stack, and you don't have issues when an IRQ interrupts code running near the stack limit. 

I don't know anything about the OS you are using... all the I/O boards I've used run 'bare board'; I usually manage to write a buffer manager and an interrupt scheme, but that is about it, and I don't bother with any blocking calls or task switching.
Altera_Forum
Honored Contributor II

 

--- Quote Start ---  

Can stack size be assigned on a per-task basis, with each task getting its own stack size, rather than all of them using the global TASK_STACKSIZE define? 

--- Quote End ---  

 

With uC/OS a different stack size can be specified for each task. If I remember correctly, you define it at run time when you create the task. 

 

@dsl 

With any OS you generally are not supposed to use hardware interrupts directly. The task scheduler (which has its own stack) takes care of all interrupts and wakes any pending task when required. So what application tasks actually see are OS events which emulate interrupts. For example, what in 'bare board' operation is a timer ISR becomes, in an OS environment, a task which is usually kept asleep and wakes only when the OS schedules it, emulating a timer interrupt. Even this timer is usually not real, but is derived from the OS system timer tick count.
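
A minimal sketch of such an emulated timer under uC/OS-II (the task name and period are hypothetical; OSTimeDlyHMSM() is the kernel's tick-based delay call):

    #include "includes.h" /* uC/OS-II master include, as in the Nios II BSP */

    void timer_emulation_task(void* pdata)
    {
        for (;;) {
            /* Sleep for 100 ms worth of system ticks; the scheduler
               wakes the task, which then acts like a periodic timer ISR. */
            OSTimeDlyHMSM(0, 0, 0, 100);
            /* ... periodic work goes here ... */
        }
    }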
Altera_Forum
Honored Contributor II

Yes, in uC/OS you allocate the stack yourself when creating the task, either with a memory allocation call or a static array, and you specify the stack size in the OSTaskCreateExt() call. 
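
A minimal sketch of that, using OSTaskCreateExt() with a statically allocated stack (the task name, priority, and size are hypothetical; on the Nios II port the stack grows downward, so the stack-top argument is the highest address of the array):

    #include "includes.h" /* uC/OS-II */

    #define MONITOR_PHY_STK_SIZE 1024 /* per-task size, in OS_STK units */

    static OS_STK monitor_phy_stk[MONITOR_PHY_STK_SIZE];

    void monitor_phy_task(void* pdata); /* task entry point, defined elsewhere */

    void create_monitor_phy_task(void)
    {
        INT8U err;
        err = OSTaskCreateExt(monitor_phy_task,                           /* entry point   */
                              NULL,                                       /* argument      */
                              &monitor_phy_stk[MONITOR_PHY_STK_SIZE - 1], /* top of stack  */
                              9,                                          /* priority      */
                              9,                                          /* task id       */
                              &monitor_phy_stk[0],                        /* stack bottom  */
                              MONITOR_PHY_STK_SIZE,                       /* stack size    */
                              NULL,                                       /* TCB extension */
                              OS_TASK_OPT_STK_CHK | OS_TASK_OPT_STK_CLR);
        /* err == OS_NO_ERR on success */
    }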

As for the interrupts, when you use a multitasking OS they are usually hidden from the application, but they still occur and are handled by the drivers. The ISR then just pushes the registers onto whatever stack is currently in use, so that can be any task's stack.