Intel® ISA Extensions

Intraregister sum

unrue
Beginner
Dear Intel users,
I need to perform an intraregister sum many times with intrinsics. For example:
x += a[0] + a[1] + a[2] + a[3]
where a is of __m128 type.
How can I do that? What is the fastest way?
Thanks in advance!
TimP
Honored Contributor III
There is little consensus on this, except that the way you have written it may be one of the slower options. Straightforward attempts to speed it up or to cut down on numerical variation, such as
x += (a[0] + a[1]) + (a[2] + a[3]);
are likely to be ignored by icc -fast (the default) or even gcc -ffast-math.
If you are using SSE3, you can use the horizontal add instruction, which will not be the fastest on all CPU types, although it should produce the minimum number of instructions.
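
For reference, a minimal intrinsics sketch of the double horizontal add (the function name hsum_sse3 is mine; requires SSE3, e.g. compile with -msse3):

#include <pmmintrin.h>  // SSE3 intrinsics

// Horizontal sum of a __m128, grouped as (a0+a1)+(a2+a3).
static float hsum_sse3(__m128 a)
{
    __m128 t = _mm_hadd_ps(a, a);  // [a0+a1, a2+a3, a0+a1, a2+a3]
    t = _mm_hadd_ps(t, t);         // [sum, sum, sum, sum]
    return _mm_cvtss_f32(t);       // extract the low float
}

// usage, following the question: x += hsum_sse3(a);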
unrue
Beginner
Dear cikikamakuro,

the definition of hadd with two vectors a and b is:
result = b2+b3 | b1+b0 | a2+a3 | a1+a0
This is not what I want:
a0+a1+a2+a3
I can do it using some shifts or other operations, but not in a single assembly instruction.
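
To illustrate the layout described above, a small self-contained sketch (the values are mine, chosen only for demonstration):

#include <stdio.h>
#include <pmmintrin.h>  // SSE3

int main(void)
{
    __m128 a = _mm_setr_ps(1, 2, 3, 4);      // a0..a3
    __m128 b = _mm_setr_ps(10, 20, 30, 40);  // b0..b3
    __m128 r = _mm_hadd_ps(a, b);            // [a0+a1, a2+a3, b0+b1, b2+b3]
    float out[4];
    _mm_storeu_ps(out, r);
    printf("%g %g %g %g\n", out[0], out[1], out[2], out[3]);  // prints: 3 7 30 70
    return 0;
}

One hadd therefore produces only pairwise sums, which is why it cannot give a0+a1+a2+a3 in a single instruction.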


TimP
Honored Contributor III
Quoting unrue
a0+a1+a2+a3
I can do it using some shifts or other operations, but not in a single assembly instruction.

Yes, haddps has to be used twice to produce the sum of the 4 operands. On Intel CPUs, other methods are likely to be slightly faster. If you're concerned about this level of detail, you may also wish to consider whether you want (a0+a1)+(a2+a3) or (a0+a2)+(a1+a3). The difference in numerical results is usually more noticeable than the difference in timing.
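
A small demonstration of that numerical point (the values are mine, chosen to make the single-precision rounding visible):

#include <stdio.h>

int main(void)
{
    float a0 = 1e8f, a1 = -1e8f, a2 = 1.0f, a3 = 1.0f;
    float s1 = (a0 + a1) + (a2 + a3);  // 0 + 2 = 2
    float s2 = (a0 + a2) + (a1 + a3);  // 1e8f + 1.0f rounds back to 1e8f, so this is 0
    printf("%g %g\n", s1, s2);         // prints: 2 0
    return 0;
}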
xavierasm
Beginner

// accumulating xmm0[0]+xmm0[1]+xmm0[2]+xmm0[3] into xmm0[0]

// SSE3:
haddps  xmm0, xmm0
haddps  xmm0, xmm0

// SSE2:
movhlps xmm1, xmm0      // get bits 64-127 of xmm0
addps   xmm0, xmm1      // sums are in 2 dwords
pshufd  xmm1, xmm0, 1   // get bits 32-63 of xmm0
addss   xmm0, xmm1      // sum is in one dword

// SSE:
movaps  xmm1, xmm0
shufps  xmm1, xmm1, (2+4*3+16*0+64*1)   // 0x4E: swap the two 64-bit halves
addps   xmm0, xmm1
movaps  xmm1, xmm0
shufps  xmm1, xmm0, (1+4*1+16*3+64*3)   // 0xF5: move element 1 into the low dword
addss   xmm0, xmm1
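
For reference, the SSE2 sequence above written with intrinsics (a sketch; the function name is mine):

#include <emmintrin.h>  // SSE2

static float hsum_sse2(__m128 x)
{
    __m128 hi = _mm_movehl_ps(x, x);   // movhlps: [x2, x3, x2, x3]
    x = _mm_add_ps(x, hi);             // [x0+x2, x1+x3, ...]
    __m128 s = _mm_castsi128_ps(
        _mm_shuffle_epi32(_mm_castps_si128(x), 1));  // pshufd: element 1 into slot 0
    return _mm_cvtss_f32(_mm_add_ss(x, s));          // addss: (x0+x2)+(x1+x3)
}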

------------------------------

I have not yet found how to do the same thing on AVX with __m256 (ymm0[0]+ymm0[1]+...+ymm0[7]).
If anyone has done it, please let me know here.

Brijender_B_Intel
If the intention is to add the 8 elements of a YMM register [x0, x1, ..., x7], I don't think you will get any performance gain from AVX; it will be the same as SSE2.
AVX has a lane concept: 4 elements are in the upper lane (x4-x7) and 4 elements are in the lower lane (x0-x3). First you need to bring the upper 4 elements down.


__m256 uLane = _mm256_permute2f128_ps(ymm0, ymm0, 0x01);

// depending on how you want to add the elements, the result may differ, as pointed out by Tim earlier.
// the efficient way is to add the two lanes now:
ymm0 = _mm256_add_ps(ymm0, uLane);

Follow the SSE2 code now (the lower lane of ymm0 holds the 4 partial sums).
....
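
Putting those steps together as a sketch (the function name hsum_avx is mine; requires AVX, e.g. compile with -mavx):

#include <immintrin.h>  // AVX

static float hsum_avx(__m256 v)
{
    // swap the 128-bit lanes and add: the low lane now holds 4 partial sums
    __m256 swapped = _mm256_permute2f128_ps(v, v, 0x01);
    v = _mm256_add_ps(v, swapped);
    // continue as in the SSE2 case on the low 128 bits
    __m128 x  = _mm256_castps256_ps128(v);
    __m128 hi = _mm_movehl_ps(x, x);
    x = _mm_add_ps(x, hi);
    __m128 s = _mm_shuffle_ps(x, x, 1);  // element 1 into slot 0
    return _mm_cvtss_f32(_mm_add_ss(x, s));
}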