Intel® ISA Extensions

AVX instruction choice

zhangxiuxia
Beginner
..B1.22:                                                # Preds ..B1.22 ..B1.21
#       vmovupd     (%r14,%r12,8), %xmm0                #23.27
#       vmovupd     (%r13,%r12,8), %xmm1                #24.27
#       vinsertf128 $1, 16(%r13,%r12,8), %ymm1, %ymm3   #24.27
#       vinsertf128 $1, 16(%r14,%r12,8), %ymm0, %ymm2   #23.27
        vmovupd     (%r14,%r12,8), %ymm2
        vmovupd     (%r13,%r12,8), %ymm3                #24.27

        vmulpd      %ymm3, %ymm2, %ymm4                 #35.30
        vaddpd      (%rcx,%r12,8), %ymm4, %ymm5         #35.13
        vmovupd     %ymm5, (%rcx,%r12,8)                #25.27

#       vmovupd     32(%r14,%r12,8), %xmm6              #23.27
#       vmovupd     32(%r13,%r12,8), %xmm7              #24.27
#       vinsertf128 $1, 48(%r13,%r12,8), %ymm7, %ymm9   #24.27
#       vinsertf128 $1, 48(%r14,%r12,8), %ymm6, %ymm8   #23.27
        vmovupd     32(%r14,%r12,8), %ymm8              #23.27
        vmovupd     32(%r13,%r12,8), %ymm9              #24.27

        vmulpd      %ymm9, %ymm8, %ymm10                #35.30
        vaddpd      32(%rcx,%r12,8), %ymm10, %ymm11     #35.13
        vmovupd     %ymm11, 32(%rcx,%r12,8)             #25.27

#       vmovupd     64(%r14,%r12,8), %xmm12             #23.27
#       vmovupd     64(%r13,%r12,8), %xmm13             #24.27
#       vinsertf128 $1, 80(%r13,%r12,8), %ymm13, %ymm15 #24.27
#       vinsertf128 $1, 80(%r14,%r12,8), %ymm12, %ymm14 #23.27
        vmovupd     64(%r14,%r12,8), %ymm14             #23.27
        vmovupd     64(%r13,%r12,8), %ymm15             #24.27

        vmulpd      %ymm15, %ymm14, %ymm0               #35.30
        vaddpd      64(%rcx,%r12,8), %ymm0, %ymm1       #35.13
        vmovupd     %ymm1, 64(%rcx,%r12,8)              #25.27

#       vmovupd     96(%r14,%r12,8), %xmm2              #23.27
#       vmovupd     96(%r13,%r12,8), %xmm3              #24.27
#       vinsertf128 $1, 112(%r13,%r12,8), %ymm3, %ymm5  #24.27
#       vinsertf128 $1, 112(%r14,%r12,8), %ymm2, %ymm4  #23.27
        vmovupd     96(%r14,%r12,8), %ymm4              #23.27
        vmovupd     96(%r13,%r12,8), %ymm5              #24.27

        vmulpd      %ymm5, %ymm4, %ymm6                 #35.30
        vaddpd      96(%rcx,%r12,8), %ymm6, %ymm7       #35.13
        vmovupd     %ymm7, 96(%rcx,%r12,8)              #25.27

        addq        $16, %r12                           #34.9
        cmpq        %rax, %r12                          #34.9
        jb          ..B1.22       # Prob 82%            #34.9


This is my code.

I modified it so that the plain vmovupd %ymm loads replace the instructions that are now commented out with "#" (the original 128-bit vmovupd/vinsertf128 pairs). This reduces the instruction count and makes the code clearer, but performance dropped noticeably, by about 5%.

Can anyone tell me why? Why is the decomposed (split-load) code faster?
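
For context, a minimal C sketch of the kind of loop the source-line annotations above (#23.27, #24.27, #35.13/#35.30) suggest: a multiply-accumulate over doubles. The function signature and array names here are hypothetical, not taken from the actual spmv_dia source.

void kernel(const double *a, const double *b, double *y, long n)
{
    /* each trip of the asm loop above covers four unrolled copies of this
       body (16 doubles per iteration, hence the addq $16, %r12) */
    for (long i = 0; i < n; i++)
        y[i] += a[i] * b[i];   /* vmulpd + vaddpd + vmovupd store */
}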
Maxym_D_Intel
Employee
It might take a while to reproduce your code in compilable form on my side.

Have you already looked at the IACA details for both cases?
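
For reference, this is roughly how one could bracket the loop for IACA analysis; iacaMarks.h ships with the IACA download and defines the IACA_START/IACA_END markers, and the kernel below is only a hypothetical stand-in for the real source.

#include "iacaMarks.h"   /* provided with the IACA package */

void kernel(const double *a, const double *b, double *y, long n)
{
    for (long i = 0; i < n; i++) {
        IACA_START           /* marks the top of the analyzed iteration */
        y[i] += a[i] * b[i];
    }
    IACA_END                 /* marks the end; the resulting object file is fed to iaca */
}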
bronxzv
New Contributor II

On Sandy Bridge a 256-bit unaligned load/store is slower than two 128-bit loads/stores; that's why the code you replaced (probably compiler generated, isn't it?) is faster.

Your variant will probably be faster on future CPUs like Haswell, though. Hint: the Intel C++ compiler no longer splits 256-bit unaligned loads/stores in two parts for AVX2 targets.

Have a look at this post: http://www.realworldtech.com/forums/index.cfm?action=detail&id=127175&threadid=127150&roomid=2
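
To make the two strategies concrete, here is an illustrative intrinsics sketch of the split-load form versus the single 256-bit unaligned load; the helper names are made up, not from the original code.

#include <immintrin.h>

/* one full-width unaligned 256-bit load (the hand-edited version) */
static __m256d load_256(const double *p)
{
    return _mm256_loadu_pd(p);
}

/* the same four doubles as two 128-bit unaligned loads merged with
   vinsertf128 (the pattern the compiler emitted for Sandy Bridge) */
static __m256d load_2x128(const double *p)
{
    __m128d lo = _mm_loadu_pd(p);
    __m128d hi = _mm_loadu_pd(p + 2);
    return _mm256_insertf128_pd(_mm256_castpd128_pd256(lo), hi, 1);
}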

TimP
Honored Contributor III
I understood that Ivy Bridge was to support effective AVX-256 vmovups, although evidently Haswell would make it more advantageous. It was understood that MSVC++ intrinsics programmers didn't care to observe recommendations to split unaligned moves explicitly.
Even on Westmere, when the frequency of misalignment is 50% or more, it will pay to split unaligned movups into 64-bit moves, contrary to the official architecture recommendations.
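
A minimal SSE2 intrinsics sketch of that 64-bit split (one unaligned movupd versus a movsd/movhpd pair); the helper names are illustrative only.

#include <emmintrin.h>

/* one unaligned 128-bit load (movupd) */
static __m128d load_128(const double *p)
{
    return _mm_loadu_pd(p);
}

/* the same two doubles loaded as 64-bit halves (movsd + movhpd) */
static __m128d load_2x64(const double *p)
{
    __m128d v = _mm_load_sd(p);      /* low 64 bits, upper half zeroed */
    return _mm_loadh_pd(v, p + 1);   /* high 64 bits */
}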
zhangxiuxia
Beginner
Yes, I looked at the IACA details for both codes.

The first one:
1 Intel Architecture Code Analyzer Version - 1.1.3
2 Analyzed File - spmv_dia_icc_avx_nounroll.o
3 Binary Format - 64Bit
4 Architecture - Intel AVX
5
6 Analysis Report
7 ---------------
8 Total Throughput: 12 Cycles; Throughput Bottleneck: Port2_ALU, Port2_DATA, Port3_ALU, Port3_DATA
9 Total number of Uops bound to ports: 46
10 Data Dependency Latency: 16 Cycles; Performance Latency: 28 Cycles
11
12 Port Binding in cycles:
13 -------------------------------------------------------
14 | Port | 0 - DV | 1 | 2 - D | 3 - D | 4 | 5 |
15 -------------------------------------------------------
16 | Cycles | 7 | 0 | 5 | 12 | 12 | 12 | 12 | 8 | 6 |
17 -------------------------------------------------------
18
19 N - port number, DV - Divider pipe (on port 0), D - Data fetch pipe (on ports
20 CP - on a critical Data Dependency Path
21 N - number of cycles port was bound
22 X - other ports that can be used by this instructions
23 F - Macro Fusion with the previous instruction occurred
24 ^ - Micro Fusion happened
25 * - instruction micro-ops not bound to a port
26 @ - Intel AVX to Intel SSE code switch, dozens of cycles penalty is expe
27 ! - instruction not supported, was not accounted in Analysis
28
29 | Num of | Ports pressure in cycles | |
30 | Uops | 0 - DV | 1 | 2 - D | 3 - D | 4 | 5 | |
31 ------------------------------------------------------------
32 | 1 | | | | 1 | 1 | X | X | | | CP | vmovupd xmm0, xmmw
33 | 1 | | | | X | X | 1 | 1 | | | CP | vmovupd xmm1, xmmw
34 | 2 | 1 | | | X | X | 1 | 1 | | X | CP | vinsertf128 ymm3,
35 | 2 | X | | | X | X | 1 | 1 | | 1 | CP | vinsertf128 ymm2,
36 | 1 | 1 | | | | | | | | | CP | vmulpd ymm4, ymm2,
37 | 2 | | | 1 | 1 | 2 | X | X | | | CP | vaddpd ymm5, ymm4,
38 | 2 | | | | 1 | | X | | 2 | | CP | vmovupd ymmword pt
39 | 1 | | | | 1 | 1 | X | X | | | CP | vmovupd xmm6, xmmw
40 | 1 | | | | X | X | 1 | 1 | | | CP | vmovupd xmm7, xmmw
41 | 2 | X | | | X | X | 1 | 1 | | 1 | CP | vinsertf128 ymm9,
42 | 2 | 1 | | | X | X | 1 | 1 | | X | CP | vinsertf128 ymm8,
43 | 1 | 1 | | | | | | | | | CP | vmulpd ymm10, ymm8
44 | 2 | | | 1 | 1 | 2 | X | X | | | CP | vaddpd ymm11, ymm1
45 | 2 | | | | 1 | | X | | 2 | | CP | vmovupd ymmword pt
46 | 1 | | | | 1 | 1 | X | X | | | CP | vmovupd xmm12, xmm
47 | 1 | | | | X | X | 1 | 1 | | | CP | vmovupd xmm13, xmm
48 | 2 | X | | | X | X | 1 | 1 | | 1 | CP | vinsertf128 ymm15,
49 | 2 | X | | | X | X | 1 | 1 | | 1 | CP | vinsertf128 ymm14,
50 | 1 | 1 | | | | | | | | | CP | vmulpd ymm0, ymm14
51 | 2 | | | 1 | 1 | 2 | X | X | | | CP | vaddpd ymm1, ymm0,
52 | 2 | | | | 1 | | X | | 2 | | CP | vmovupd ymmword pt
53 | 1 | | | | 1 | 1 | X | X | | | CP | vmovupd xmm2, xmmw
54 | 1 | | | | X | X | 1 | 1 | | | CP | vmovupd xmm3, xmmw
55 | 2 | X | | | X | X | 1 | 1 | | 1 | CP | vinsertf128 ymm5,
56 | 2 | 1 | | | X | X | 1 | 1 | | X | CP | vinsertf128 ymm4,
57 | 1 | 1 | | | | | | | | | CP | vmulpd ymm6, ymm4,
58 | 2 | | | 1 | 1 | 2 | X | X | | | CP | vaddpd ymm7, ymm6,
59 | 2 | | | | 1 | | X | | 2 | | CP | vmovupd ymmword pt
60 | 1 | X | | 1 | | | | | | X | | add r12, 0x10
61 | 1 | X | | X | | | | | | 1 | | cmp r12, rax
62 | 0F | | | | | | | | | | | jb 0xfffffffffffff


The second case:
1 Intel Architecture Code Analyzer Version - 1.1.3
2 Analyzed File - spmv_dia_avx_me.o
3 Binary Format - 64Bit
4 Architecture - Intel AVX
5
6 Analysis Report
7 ---------------
8 Total Throughput: 12 Cycles; Throughput Bottleneck: Port2_DATA
9 Total number of Uops bound to ports: 27
10 Data Dependency Latency: 17 Cycles; Performance Latency: 26 Cycles
11
12 Port Binding in cycles:
13 -------------------------------------------------------
14 | Port | 0 - DV | 1 | 2 - D | 3 - D | 4 | 5 |
15 -------------------------------------------------------
16 | Cycles | 3 | 0 | 3 | 8 | 12 | 7 | 10 | 8 | 2 |
17 -------------------------------------------------------
18
19 N - port number, DV - Divider pipe (on port 0), D - Data fetch pipe (on ports
20 CP - on a critical Data Dependency Path
21 N - number of cycles port was bound
22 X - other ports that can be used by this instructions
23 F - Macro Fusion with the previous instruction occurred
24 ^ - Micro Fusion happened
25 * - instruction micro-ops not bound to a port
26 @ - Intel AVX to Intel SSE code switch, dozens of cycles penalty is expe
27 ! - instruction not supported, was not accounted in Analysis
28
29 | Num of | Ports pressure in cycles | |
30 | Uops | 0 - DV | 1 | 2 - D | 3 - D | 4 | 5 | |
31 ------------------------------------------------------------
32 | 1 | | | | 1 | 2 | X | X | | | | vmovupd ymm2, ymmw
33 | 1 | | | | X | X | 1 | 2 | | | | vmovupd ymm3, ymmw
34 | 2 | | | | X | | 1 | | 2 | | | vmovapd ymmword pt
35 | 1 | | | | 1 | 2 | X | X | | | CP | vmovupd ymm8, ymmw
36 | 1 | | | | X | X | 1 | 2 | | | CP | vmovupd ymm9, ymmw
37 | 1 | 1 | | | | | | | | | CP | vmulpd ymm10, ymm8
38 | 2 | | | 1 | 1 | 2 | X | X | | | CP | vaddpd ymm11, ymm1
39 | 2 | | | | 1 | | X | | 2 | | CP | vmovapd ymmword pt
40 | 1 | | | | X | X | 1 | 2 | | | CP | vmovupd ymm14, ymm
41 | 1 | | | | 1 | 2 | X | X | | | CP | vmovupd ymm15, ymm
42 | 1 | 1 | | | | | | | | | CP | vmulpd ymm0, ymm14
43 | 2 | | | 1 | X | X | 1 | 2 | | | CP | vaddpd ymm1, ymm0,
44 | 2 | | | | X | | 1 | | 2 | | CP | vmovapd ymmword pt
45 | 1 | | | | 1 | 2 | X | X | | | CP | vmovupd ymm4, ymmw
46 | 1 | | | | X | X | 1 | 2 | | | CP | vmovupd ymm5, ymmw
47 | 1 | 1 | | | | | | | | | CP | vmulpd ymm6, ymm4,
48 | 2 | | | 1 | 1 | 2 | X | X | | | CP | vaddpd ymm7, ymm6,
49 | 2 | | | | 1 | | X | | 2 | | CP | vmovapd ymmword pt
50 | 1 | X | | X | | | | | | 1 | | add r12, 0x10
51 | 1 | X | | X | | | | | | 1 | | cmp r12, rax
52 | 0F | | | | | | | | | | | jb 0xfffffffffffff

zhangxiuxia
Beginner
Comparing the result reports of the two cases:

The first (split 128-bit) case:
8 Total Throughput: 12 Cycles; Throughput Bottleneck: Port2_ALU, Port2_DATA, Port3_ALU, Port3_DATA
9 Total number of Uops bound to ports: 46
10 Data Dependency Latency: 16 Cycles; Performance Latency: 28 Cycles

The second (256-bit) case:
8 Total Throughput: 12 Cycles; Throughput Bottleneck: Port2_DATA
9 Total number of Uops bound to ports: 27
10 Data Dependency Latency: 17 Cycles; Performance Latency: 26 Cycles

The separated 128-bit mov version has a lower data dependency latency, and the 256-bit mov version has a lower performance latency.

From that result, it seems the 256-bit mov version should be faster.
zhangxiuxia
Beginner
Yes, the 128-bit loads/stores are compiler generated.
styc
Beginner

But doesn't that contradict your measurements? I believe that somewhere else in this forum somebody mentioned that IACA is hardly precise for Sandy Bridge.

zhangxiuxia
Beginner
I looked up some threads posted on the IACA board and found that someone pointed out that IACA is not very precise for Sandy Bridge: when you choose arch=AVX, it models Westmere+AVX, a virtual architecture, not Sandy Bridge.
Max_L
Employee

256-bit loads that miss L1 (and especially misaligned ones) may indeed be less efficient than a pair of 128-bit loads on Sandy Bridge (this is addressed in future micro-architectures). Alternatively, you can try to mitigate it by issuing a prefetcht0 for every 64-byte cache line somewhat in advance of the 256-bit loads that will touch it; such code may also perform better on average on future parts than split 128-bit loads. You must measure performance, however, and be certain you are not making things worse; e.g., if the data set fits in L1, you don't need the prefetch.


-Max
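
A hedged sketch of what this prefetching suggestion could look like in intrinsics; the prefetch distance is a placeholder that would need tuning, and the kernel name and signature are illustrative, not from the original code.

#include <immintrin.h>

void kernel_prefetch(const double *a, const double *b, double *y, long n)
{
    const long dist = 8;   /* placeholder: prefetch one 64-byte line ahead */
    /* assumes n is a multiple of 4 for brevity */
    for (long i = 0; i < n; i += 4) {
        _mm_prefetch((const char *)(a + i + dist), _MM_HINT_T0);
        _mm_prefetch((const char *)(b + i + dist), _MM_HINT_T0);
        __m256d va = _mm256_loadu_pd(a + i);
        __m256d vb = _mm256_loadu_pd(b + i);
        __m256d vy = _mm256_loadu_pd(y + i);
        _mm256_storeu_pd(y + i, _mm256_add_pd(vy, _mm256_mul_pd(va, vb)));
    }
}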
