Intel® Fortran Compiler
Build applications that can scale for the future with optimized code designed for Intel® Xeon® and compatible processors.

Small vector code generation problems

jimdempseyatthecove
Honored Contributor III
Place the following into three files (you pick the names):

[fortran]MODULE MOD_ALL
  ! AVX four-up double vector
  type TypeYMM
    SEQUENCE
    real(8) :: v(0:3)
  end type TypeYMM
  ! AVX three-up double vector
  type TypeYMM02
    SEQUENCE
    real(8) :: v(0:2)
  end type TypeYMM02
  ! SSE two-up double vector
  type TypeXMM
    SEQUENCE
    real(8) :: v(0:1)
  end type TypeXMM
END MODULE MOD_ALL
!-----------------------------------
MODULE MOD_UTIL
  interface CROSS
    RECURSIVE SUBROUTINE CROSS_r_r_r(VA,VB,VC)
      use MOD_ALL
      real :: VA(3), VB(3), VC(3)
    END SUBROUTINE CROSS_r_r_r
    RECURSIVE SUBROUTINE CROSS_r_r_ymm(VA,VB,VC)
      USE MOD_ALL
      real :: VA(3), VB(3)
      type(TypeYMM) :: VC(3)
    END SUBROUTINE CROSS_r_r_ymm
    RECURSIVE SUBROUTINE CROSS_r_r_ymm02(VA,VB,VC)
      USE MOD_ALL
      real :: VA(3), VB(3)
      type(TypeYMM02) :: VC(3)
    END SUBROUTINE CROSS_r_r_ymm02
    RECURSIVE SUBROUTINE CROSS_r_r_xmm(VA,VB,VC)
      USE MOD_ALL
      real :: VA(3), VB(3)
      type(TypeXMM) :: VC(3)
    END SUBROUTINE CROSS_r_r_xmm
    RECURSIVE SUBROUTINE CROSS_ymm_ymm_ymm(VA,VB,VC)
      use MOD_ALL
      type(TypeYMM) :: VA(3), VB(3), VC(3)
    END SUBROUTINE CROSS_ymm_ymm_ymm
    RECURSIVE SUBROUTINE CROSS_ymm02_ymm02_ymm02(VA,VB,VC)
      use MOD_ALL
      type(TypeYMM02) :: VA(3), VB(3), VC(3)
    END SUBROUTINE CROSS_ymm02_ymm02_ymm02
    RECURSIVE SUBROUTINE CROSS_xmm_xmm_xmm(VA,VB,VC)
      use MOD_ALL
      type(TypeXMM) :: VA(3), VB(3), VC(3)
    END SUBROUTINE CROSS_xmm_xmm_xmm
  end interface CROSS
END MODULE MOD_UTIL
!-----------------------------------
RECURSIVE SUBROUTINE CROSS_r_r_r(VA,VB,VC)
!
!*************************************
! CROSS PRODUCT OF 3 VECTORS
! (CALCULATION IS PROTECTED ALLOWING INPUT TO BE OVERWRITTEN)
  USE MOD_UTIL
  USE MOD_ALL
  real :: VA(3), VB(3), VC(3)
  real, automatic :: VS(3)
  ! COMPUTE THE CROSS PRODUCT
  VS(1) = VA(2)*VB(3) - VA(3)*VB(2)
  VS(2) = VA(3)*VB(1) - VA(1)*VB(3)
  VS(3) = VA(1)*VB(2) - VA(2)*VB(1)
  ! TRANSFER THE CROSS PRODUCT RESULT TO THE OUTPUT VECTOR
  VC(1) = VS(1)
  VC(2) = VS(2)
  VC(3) = VS(3)
END SUBROUTINE CROSS_r_r_r

RECURSIVE SUBROUTINE CROSS_r_r_ymm(VA,VB,VC)
!
!*************************************
! CROSS PRODUCT OF 3 VECTORS
! (CALCULATION IS PROTECTED ALLOWING INPUT TO BE OVERWRITTEN)
  USE MOD_UTIL
  USE MOD_ALL
  real :: VA(3), VB(3)
  type(TypeYMM) :: VC(3)
  ! COMPUTE THE CROSS PRODUCT
  ! N.B. VC cannot overlap VA or VB
  !DEC$ VECTOR ALWAYS
  VC(1).v = VA(2)*VB(3) - VA(3)*VB(2)
  !DEC$ VECTOR ALWAYS
  VC(2).v = VA(3)*VB(1) - VA(1)*VB(3)
  !DEC$ VECTOR ALWAYS
  VC(3).v = VA(1)*VB(2) - VA(2)*VB(1)
END SUBROUTINE CROSS_r_r_ymm

RECURSIVE SUBROUTINE CROSS_r_r_ymm02(VA,VB,VC)
!
!*************************************
! CROSS PRODUCT OF 3 VECTORS
! (CALCULATION IS PROTECTED ALLOWING INPUT TO BE OVERWRITTEN)
  USE MOD_UTIL
  USE MOD_ALL
  real :: VA(3), VB(3)
  type(TypeYMM02) :: VC(3)
  ! COMPUTE THE CROSS PRODUCT
  ! N.B. VC cannot overlap VA or VB
  !DEC$ VECTOR ALWAYS
  VC(1).v = VA(2)*VB(3) - VA(3)*VB(2)
  !DEC$ VECTOR ALWAYS
  VC(2).v = VA(3)*VB(1) - VA(1)*VB(3)
  !DEC$ VECTOR ALWAYS
  VC(3).v = VA(1)*VB(2) - VA(2)*VB(1)
END SUBROUTINE CROSS_r_r_ymm02

RECURSIVE SUBROUTINE CROSS_r_r_xmm(VA,VB,VC)
!
!*************************************
! CROSS PRODUCT OF 3 VECTORS
! (CALCULATION IS PROTECTED ALLOWING INPUT TO BE OVERWRITTEN)
  USE MOD_UTIL
  USE MOD_ALL
  real :: VA(3), VB(3)
  type(TypeXMM) :: VC(3)
  ! COMPUTE THE CROSS PRODUCT
  ! N.B. VC cannot overlap VA or VB
  !DEC$ VECTOR ALWAYS
  VC(1).v = VA(2)*VB(3) - VA(3)*VB(2)
  !DEC$ VECTOR ALWAYS
  VC(2).v = VA(3)*VB(1) - VA(1)*VB(3)
  !DEC$ VECTOR ALWAYS
  VC(3).v = VA(1)*VB(2) - VA(2)*VB(1)
END SUBROUTINE CROSS_r_r_xmm

RECURSIVE SUBROUTINE CROSS_ymm_ymm_ymm(VA,VB,VC)
  use MOD_ALL
  type(TypeYMM) :: VA(3), VB(3), VC(3)
  type(TypeYMM), automatic :: VS(3)
  ! COMPUTE THE CROSS PRODUCT
  !DEC$ VECTOR ALWAYS
  VS(1).v = VA(2).v*VB(3).v - VA(3).v*VB(2).v
  !DEC$ VECTOR ALWAYS
  VS(2).v = VA(3).v*VB(1).v - VA(1).v*VB(3).v
  !DEC$ VECTOR ALWAYS
  VS(3).v = VA(1).v*VB(2).v - VA(2).v*VB(1).v
  ! TRANSFER THE CROSS PRODUCT RESULT TO THE OUTPUT VECTOR
  !DEC$ VECTOR ALWAYS
  VC(1).v = VS(1).v
  !DEC$ VECTOR ALWAYS
  VC(2).v = VS(2).v
  !DEC$ VECTOR ALWAYS
  VC(3).v = VS(3).v
END SUBROUTINE CROSS_ymm_ymm_ymm

RECURSIVE SUBROUTINE CROSS_ymm02_ymm02_ymm02(VA,VB,VC)
  use MOD_ALL
  type(TypeYMM02) :: VA(3), VB(3), VC(3)
  type(TypeYMM02), automatic :: VS(3)
  ! COMPUTE THE CROSS PRODUCT
  !DEC$ VECTOR ALWAYS
  VS(1).v = VA(2).v*VB(3).v - VA(3).v*VB(2).v
  !DEC$ VECTOR ALWAYS
  VS(2).v = VA(3).v*VB(1).v - VA(1).v*VB(3).v
  !DEC$ VECTOR ALWAYS
  VS(3).v = VA(1).v*VB(2).v - VA(2).v*VB(1).v
  ! TRANSFER THE CROSS PRODUCT RESULT TO THE OUTPUT VECTOR
  VC(1).v = VS(1).v
  !DEC$ VECTOR ALWAYS
  VC(2).v = VS(2).v
  !DEC$ VECTOR ALWAYS
  VC(3).v = VS(3).v
END SUBROUTINE CROSS_ymm02_ymm02_ymm02

RECURSIVE SUBROUTINE CROSS_xmm_xmm_xmm(VA,VB,VC)
  use MOD_ALL
  type(TypeXMM) :: VA(3), VB(3), VC(3)
  type(TypeXMM), automatic :: VS(3)
  ! COMPUTE THE CROSS PRODUCT
  !DEC$ VECTOR ALWAYS
  VS(1).v = VA(2).v*VB(3).v - VA(3).v*VB(2).v
  !DEC$ VECTOR ALWAYS
  VS(2).v = VA(3).v*VB(1).v - VA(1).v*VB(3).v
  !DEC$ VECTOR ALWAYS
  VS(3).v = VA(1).v*VB(2).v - VA(2).v*VB(1).v
  ! TRANSFER THE CROSS PRODUCT RESULT TO THE OUTPUT VECTOR
  !DEC$ VECTOR ALWAYS
  VC(1).v = VS(1).v
  !DEC$ VECTOR ALWAYS
  VC(2).v = VS(2).v
  !DEC$ VECTOR ALWAYS
  VC(3).v = VS(3).v
END SUBROUTINE CROSS_xmm_xmm_xmm
[/fortran]
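For anyone wanting to reproduce this, a minimal driver along these lines (my own sketch, not part of the three files above; the program name is hypothetical) should exercise the generic. For A=(1,0,0) and B=(0,1,0) replicated across the lanes, the cross product is (0,0,1) in every lane:

[fortran]PROGRAM TEST_CROSS
  ! Hypothetical driver - not part of the original three files.
  USE MOD_ALL
  USE MOD_UTIL
  implicit none
  type(TypeYMM) :: A(3), B(3), C(3)
  integer :: i
  ! A = (1,0,0), B = (0,1,0), replicated across all four lanes
  do i = 1, 3
    A(i).v = 0.0d0
    B(i).v = 0.0d0
  end do
  A(1).v = 1.0d0
  B(2).v = 1.0d0
  call CROSS(A, B, C)         ! resolves to CROSS_ymm_ymm_ymm
  print *, 'C(1) =', C(1).v   ! expect 0.0 in all lanes
  print *, 'C(2) =', C(2).v   ! expect 0.0 in all lanes
  print *, 'C(3) =', C(3).v   ! expect 1.0 in all lanes
END PROGRAM TEST_CROSS
[/fortran]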
Compile with full optimization, with AVX enabled,
and with assembler output (including source) turned on.
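On Windows that corresponds to something like the following (file names are placeholders; /FAs requests the assembly listing annotated with source):

[plain]ifort /O3 /QxAVX /FAs MOD_ALL.f90 MOD_UTIL.f90 CROSS.f90
[/plain]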

Rename the assembler output file.

Remove all occurrences of

!DEC$ VECTOR

Recompile and open up both assembler files.

To save time, scroll both files down to the last subroutine:
RECURSIVE SUBROUTINE CROSS_xmm_xmm_xmm(VA,VB, VC)


Two issues:

a) the compiler optimizations will not vectorize the code without the !DEC$ VECTOR directives
b) bad code is generated without the !DEC$ VECTOR directives

Jim Dempsey
jimdempseyatthecove
Honored Contributor III
I forgot to mention: with !DEC$ VECTOR ALWAYS it produces really nice code.

*** note ***

The documentation states:

The VECTOR and NOVECTOR directives control vectorization of the DO loop that directly follows the directive.

That is not completely correct.

In this case, I use (and the compiler obeys) a request to vectorize the statement following the

!DEC$ VECTOR ALWAYS

Unfortunately I cannot specify !DEC$ VECTOR ALWAYS for a group of lines or for a subroutine in total.
Inserting

!DEC$ VECTOR ALWAYS
do
statement
statement
...
exit
end do

seems inappropriate.
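In context, that workaround would look something like this (my untested sketch, applied to one of the cross-product statement groups; whether the directive then covers each array assignment in the body is exactly the documented behavior in question):

[fortran]! Hypothetical workaround: wrap the statement group in a
! single-trip DO so that one directive covers all of it.
!DEC$ VECTOR ALWAYS
do
  VS(1).v = VA(2).v*VB(3).v - VA(3).v*VB(2).v
  VS(2).v = VA(3).v*VB(1).v - VA(1).v*VB(3).v
  VS(3).v = VA(1).v*VB(2).v - VA(2).v*VB(1).v
  exit ! single trip
end do
[/fortran]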

Consider:

!DEC$ BEGIN VECTOR ALWAYS
statement
statement
...
!DEC$ END VECTOR ALWAYS

as a feature request.

Jim Dempsey
TimP
Honored Contributor III
I suppose the compiler developers forgot to change the documentation to say "DO loop or array assignment" back when they agreed to make the directive apply to the latter. If you are a stickler for terminology, the optimization reports shouldn't be referring to loops either.
When I set /arch:SSE4.1, and remove the VECTOR ALWAYS directives, the compiler I have active currently reports vectorization of 8 of the 9 cross product assignments which involve arithmetic. I don't see an obvious reason why just 1 of those 9 should be reported inefficient. 9 are reported vectorized for CORE-AVX2 (which we could run only on the SDE); only 3 of those optimizations are on the same assignments as SSE4.1 vectorizes.
21 assignments for SSE4.1 are reported as "completely unrolled" which probably is the way to go if the alignments aren't known and the compiler is attempting to avoid inefficiency on a variety of CPU models. AVX2 architecture evidently is designed to handle several such operations without concern about alignment.
The difference between SSE4.1 and 4.2? According to my observation, the compiler is more likely to consider vectorization with split memory accesses to remove alignment issues, when using this option. Several Intel CPU models where the architecture manuals recommend against split memory accesses do in fact benefit from them when the alignments vary.
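(For reference, the reports quoted above come from the compiler's vectorization diagnostics; on compilers of this vintage that is something like the following, though the exact option spellings vary by version, so treat these as a sketch:)

[plain]rem 12.x-era spelling:
ifort /c /O3 /arch:SSE4.1 /Qvec-report2 CROSS.f90
rem newer spelling:
ifort /c /O3 /QxCORE-AVX2 /Qopt-report /Qopt-report-phase:vec CROSS.f90
[/plain]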
jimdempseyatthecove
Honored Contributor III
Intel Visual Fortran Composer XE 2011 Update 10 Integration for Microsoft Visual Studio* 2010, version 12.1.3530.2010.

I spoke too quickly.

Bad code is generated for the CROSS_ymm_ymm_ymm.

Wrong results are being returned.

Interestingly, the .ASM file differs from the Disassembly window (the listing was produced in the same compilation that built the executable).

The .ASM file shows an error:

[plain]Assembler listing file (line numbers and comments squished out)
Comments added after ";"

CROSS_YMM_YMM_YMM PROC
; parameter 1(VA): rcx
; parameter 2(VB): rdx
; parameter 3(VC): r8
;;; RECURSIVE SUBROUTINE CROSS_ymm_ymm_ymm(VA,VB,VC)
        push rbp                                          ; save rbp
        sub rsp, 176                                      ; reserve stack space
        lea rbp, QWORD PTR [32+rsp]                       ; initialize new rbp
        mov QWORD PTR [128+rbp], r13                      ; save r13
        lea r13, QWORD PTR [31+rbp]                       ; point r13 at stack temp for VS(1:3)
;;; VS(1).v = VA(2).v*VB(3).v - VA(3).v*VB(2).v
        vmovupd xmm2, XMMWORD PTR [64+rdx]                ; xmm2 = VB(3).v(0:1)
        vmovupd xmm1, XMMWORD PTR [32+rcx]                ; xmm1 = VA(2).v(0:1)
        vmovupd xmm5, XMMWORD PTR [64+rcx]                ; xmm5 = VA(3).v(0:1)
        and r13, -32                                      ; align r13 stack temp for VS(1:3)
        vinsertf128 ymm0, ymm2, XMMWORD PTR [80+rdx], 1   ; ymm0 = VB(3).v(0:3)
        vmovupd xmm2, XMMWORD PTR [32+rdx]                ; xmm2 = VB(2).v(0:1)
        vinsertf128 ymm2, ymm2, XMMWORD PTR [48+rdx], 1   ; ymm2 = VB(2).v(0:3)
        vinsertf128 ymm1, ymm1, XMMWORD PTR [48+rcx], 1   ; ymm1 = VA(2).v(0:3)
        vinsertf128 ymm4, ymm5, XMMWORD PTR [80+rcx], 1   ; ymm4 = VA(3).v(0:3)
        vmulpd ymm3, ymm1, ymm0                           ; ymm3 = VA(2).v * VB(3).v
        vmulpd ymm5, ymm4, ymm2                           ; ymm5 = VA(3).v * VB(2).v
        vsubpd ymm3, ymm3, ymm5                           ; ymm3 = VA(2).v*VB(3).v - VA(3).v*VB(2).v
        vmovupd YMMWORD PTR [r13], ymm3                   ; VS(1).v = VA(2).v*VB(3).v - VA(3).v*VB(2).v
;;; VS(2).v = VA(3).v*VB(1).v - VA(1).v*VB(3).v
        vmovupd xmm3, XMMWORD PTR [rdx]                   ; xmm3 = VB(1).v(0:1)
        vinsertf128 ymm3, ymm3, XMMWORD PTR [16+rdx], 1   ; ymm3 = VB(1).v(0:3)
        vmulpd ymm5, ymm4, ymm3                           ; ymm5 = VA(3).v * VB(1).v
        vmovupd xmm4, XMMWORD PTR [rcx]                   ; ymm4 = VA(1).v(0:1)
;;; VS(3).v = VA(1).v*VB(2).v - VA(2).v*VB(1).v
        vmulpd ymm1, ymm1, ymm3                           ; ymm1 = VA(2).v * VB(1).v
        vinsertf128 ymm4, ymm4, XMMWORD PTR [16+rcx], 1   ; ymm4 = VA(1).v(0:3)
        vmulpd ymm0, ymm4, ymm0                           ; ymm0 = VA(1).v * VB(3).v
        vmulpd ymm2, ymm4, ymm2                           ; ymm2 = VA(1).v * VB(2).v
;;; VC(1).v = VS(1).v
        vmovupd xmm4, XMMWORD PTR [r13]                   ; xmm4 = VS(1).v(0:1)
        vmovupd XMMWORD PTR [r8], xmm4                    ; VC(1).v(0:1) = VS(1).v(0:1)
        vsubpd ymm0, ymm5, ymm0                           ; VS(2).v = ymm0 = VA(3).v*VB(1).v - VA(1).v*VB(3).v
        vsubpd ymm3, ymm2, ymm1                           ; VS(3).v = ymm3 = VA(1).v*VB(2).v - VA(2).v*VB(1).v
;;; VC(2).v = VS(2).v
;;; VC(3).v = VS(3).v
        vmovupd xmm5, XMMWORD PTR [16+r13]                ; xmm5 = VS(1).v(2:3)
        vmovupd XMMWORD PTR [16+r8], xmm5                 ; VC(1).v(2:3) = VS(1).v(2:3)
; *** next one bad ***
        vmovupd XMMWORD PTR [48+r8], xmm1                 ; VC(2).v(2:3) = VA(2).v(2:3) * VB(1).v(2:3)
; *** next one bad ***
        vmovupd XMMWORD PTR [64+r8], xmm2                 ; VC(3).v(0:1) = VA(1).v(0:1) * VB(2).v(0:1)
        vmovupd YMMWORD PTR [32+r13], ymm0                ; VS(2).v = VS(2).v
        vmovupd YMMWORD PTR [64+r13], ymm3                ; VS(3).v = VS(3).v
        vmovupd XMMWORD PTR [32+r8], xmm0                 ; VC(2).v(0:1) = VS(2).v(0:1)
; *** next one bad ***
        vmovupd XMMWORD PTR [80+r8], xmm3                 ; VC(3).v(2:3) = VS(3).v(0:1)
;;; END SUBROUTINE CROSS_ymm_ymm_ymm
        mov r13, QWORD PTR [128+rbp]                      ;104.1
        vzeroupper                                        ;104.1
        lea rsp, QWORD PTR [144+rbp]                      ;104.1
        pop rbp                                           ;104.1
        ret                                               ;104.1
[/plain]
I think my comments on the right side are correct.

The disassembly uses different registers, .AND. generates bad code:

[plain]RECURSIVE SUBROUTINE CROSS_ymm_ymm_ymm(VA,VB,VC)
000000013FBDFDC0  push rbp                                        ; save frame pointer
000000013FBDFDC1  sub rsp,0B0h                                    ; reserve stack
000000013FBDFDC8  lea rbp,[rsp+20h]                               ; new frame pointer
000000013FBDFDCD  mov qword ptr [rbp+80h],r13                     ; save r13
000000013FBDFDD4  lea r13,[rbp+1Fh]                               ; r13 = somewhere near VS(1:3)
VS(1).v = VA(2).v*VB(3).v - VA(3).v*VB(2).v
000000013FBDFDD8  vmovupd xmm2,xmmword ptr [rdx+40h]              ; xmm2 = VB(3).v(0:1)
000000013FBDFDDD  vmovupd xmm1,xmmword ptr [rcx+20h]              ; xmm1 = VA(2).v(0:1)
000000013FBDFDE2  vmovupd xmm5,xmmword ptr [rcx+40h]              ; xmm5 = VA(3).v(0:1)
000000013FBDFDE7  and r13,0FFFFFFFFFFFFFFE0h                      ; r13 = VS(1:3)
VS(1).v = VA(2).v*VB(3).v - VA(3).v*VB(2).v
000000013FBDFDEB  vinsertf128 ymm0,ymm2,xmmword ptr [rdx+50h],1   ; ymm0 = VB(3).v(0:3)
000000013FBDFDF2  vmovupd xmm2,xmmword ptr [rdx+20h]              ; xmm2 = VB(2).v(0:1)
000000013FBDFDF7  vinsertf128 ymm2,ymm2,xmmword ptr [rdx+30h],1   ; ymm2 = VB(2).v(0:3)
000000013FBDFDFE  vinsertf128 ymm1,ymm1,xmmword ptr [rcx+30h],1   ; ymm1 = VA(2).v(0:3)
000000013FBDFE05  vinsertf128 ymm4,ymm5,xmmword ptr [rcx+50h],1   ; ymm4 = VA(3).v(0:3)
000000013FBDFE0C  vmulpd ymm3,ymm1,ymm0                           ; ymm3 = VA(2).v * VB(3).v
000000013FBDFE10  vmulpd ymm5,ymm4,ymm2                           ; ymm5 = VA(3).v * VB(2).v
000000013FBDFE14  vsubpd ymm3,ymm3,ymm5                           ; ymm3 = VA(2).v*VB(3).v - VA(3).v*VB(2).v
000000013FBDFE18  vmovupd ymmword ptr [rbp],ymm3                  ; VS(1) = VA(2).v*VB(3).v - VA(3).v*VB(2).v
VS(2).v = VA(3).v*VB(1).v - VA(1).v*VB(3).v
000000013FBDFE1E  vmovupd xmm3,xmmword ptr [rdx]                  ; xmm3 = VB(1).v(0:1)
000000013FBDFE22  vinsertf128 ymm3,ymm3,xmmword ptr [rdx+10h],1   ; ymm3 = VB(1).v(0:3)
000000013FBDFE29  vmulpd ymm5,ymm4,ymm3                           ; ymm5 = VA(3).v * VB(1).v
000000013FBDFE2D  vmovupd xmm4,xmmword ptr [rcx]                  ; xmm4 = VA(1).v(0:1)
VS(3).v = VA(1).v*VB(2).v - VA(2).v*VB(1).v
000000013FBDFE31  vmulpd ymm1,ymm1,ymm3                           ; ymm1 = VA(2).v * VB(1).v
VS(2).v = VA(3).v*VB(1).v - VA(1).v*VB(3).v
000000013FBDFE35  vinsertf128 ymm4,ymm4,xmmword ptr [rcx+10h],1   ; ymm4 = VA(1).v(0:3)
000000013FBDFE3C  vmulpd ymm0,ymm4,ymm0                           ; ymm0 = VA(1).v * VB(3).v
VS(3).v = VA(1).v*VB(2).v - VA(2).v*VB(1).v
000000013FBDFE40  vmulpd ymm2,ymm4,ymm2                           ; ymm2 = VA(1).v * VB(2).v
VC(1).v = VS(1).v
000000013FBDFE44  vmovupd xmm4,xmmword ptr [rbp]                  ; xmm4 = VS(1).v(0:1)
000000013FBDFE4A  vmovupd xmmword ptr [rax],xmm4                  ; VC(1).v(0:1) = VS(1).v(0:1)
VS(2).v = VA(3).v*VB(1).v - VA(1).v*VB(3).v
000000013FBDFE4F  vsubpd ymm0,ymm5,ymm0                           ; VS(2).v = ymm0 = VA(3).v*VB(1).v - VA(1).v*VB(3).v
VS(3).v = VA(1).v*VB(2).v - VA(2).v*VB(1).v
000000013FBDFE53  vsubpd ymm3,ymm2,ymm1                           ; VS(3).v = ymm3 = VA(1).v*VB(2).v - VA(2).v*VB(1).v
VC(1).v = VS(1).v
000000013FBDFE57  vmovupd xmm5,xmmword ptr [rbp+10h]              ; xmm5 = VS(1).v(2:3)
000000013FBDFE5D  vmovupd xmmword ptr [rax+10h],xmm5              ; VC(1).v(2:3) = VS(1).v(2:3)
VC(2).v = VS(2).v
; ************ the following is bad **************
000000013FBDFE63  vmovupd xmmword ptr [rax+30h],xmm1              ; VC(2).v(2:3) = VA(2).v(0:1) * VB(1).v(0:1)
VC(3).v = VS(3).v
; ************ the following is bad **************
000000013FBDFE69  vmovupd xmmword ptr [rax+40h],xmm2              ; VC(3).v(0:1) = VA(1).v(0:1) * VB(2).v(0:1)
VS(2).v = VA(3).v*VB(1).v - VA(1).v*VB(3).v
; *** copy to RAM not necessary ***
000000013FBDFE6F  vmovupd ymmword ptr [rbp+20h],ymm0              ; VS(2).v (ram) = VS(2).v (reg)
VS(3).v = VA(1).v*VB(2).v - VA(2).v*VB(1).v
; *** copy to RAM not necessary ***
000000013FBDFE75  vmovupd ymmword ptr [rbp+40h],ymm3              ; VS(3).v (ram) = VS(3).v (reg)
VC(2).v = VS(2).v
000000013FBDFE7B  vmovupd xmmword ptr [rax+20h],xmm0              ; VC(2).v(0:1) = VS(2).v(0:1)
VC(3).v = VS(3).v
; ************ the following is bad **************
000000013FBDFE81  vmovupd xmmword ptr [rax+50h],xmm3              ; VC(3).v(2:3) = VS(3).v(0:1)
END SUBROUTINE CROSS_ymm_ymm_ymm
000000013FBDFE87  mov r13,qword ptr [rbp+80h]
000000013FBDFE8E  vzeroupper
000000013FBDFE91  lea rsp,[rbp+90h]
000000013FBDFE98  pop rbp
000000013FBDFE99  ret
[/plain]
I can compile this file with optimizations off (and lose vectorization).

Jim Dempsey
jimdempseyatthecove
Honored Contributor III
These are the option settings (bold may be pertinent):

/nologo
/debug:full
/QxAVX
/fpp
/I"..\..\Programs\GlobalData\x64\DebugOpenMPFast"
/I"..\..\Programs\A_Modules\x64\DebugOpenMPFast"
/I"..\..\Source\Modules"
/I"..\..\Source\A_AVFRT"
/I"C:\gtoss\Programs\GlobalData\x64\DebugOpenMPFast"
/I"C:\gtoss\Programs\A_Modules\x64\DebugOpenMPFast"
/D_AvFRT /D_MOD
/arch:AVX
/Qopenmp
/real_size:64

/fpe:1
/module:"x64\DebugOpenMPFast\"
/object:"x64\DebugOpenMPFast\"
/Fd"x64\DebugOpenMPFast\vc100.pdb"
/traceback
/check:none
/libs:static
/threads
/dbglibs
/libdir:noauto
/c

Jim Dempsey
jimdempseyatthecove
Honored Contributor III
>> the way to go if the alignments aren't known

Currently I have not used ALIGN attributes. But this brings up an interesting point and suggestion.

Generic interfaces select on argument type and number. If the disambiguation were to include ALIGN, then the programmer could elect to write two variants of the same code: one for "unknown" alignment and one for ALIGNed arguments:

[fortran]interface CROSS
  subroutine CROSS_ymm_ymm_ymm(A,B,C)
    use MOD_ALL
    type(TypeYMM) :: A(3), B(3), C(3)
  end subroutine CROSS_ymm_ymm_ymm

  subroutine CROSS_ymm_ymm_ymm_ALIGN(A,B,C)
    use MOD_ALL
    !DEC$ ATTRIBUTES ALIGN: 32::A
    !DEC$ ATTRIBUTES ALIGN: 32::B
    !DEC$ ATTRIBUTES ALIGN: 32::C
    type(TypeYMM) :: A(3), B(3), C(3)
  end subroutine CROSS_ymm_ymm_ymm_ALIGN
end interface
[/fortran]

When the compiler can verify that all args are aligned to 32 (64, 128, ...), it would generate a call to the user-specified aligned subroutine; when any arg has unknown alignment, the subroutine without the alignment attributes would be called. The user could extend this to all permutations of alignment.
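(Until such dispatch exists, the closest thing I know of is asserting alignment inside a routine, e.g. with the ASSUME_ALIGNED directive where the compiler supports it. A sketch, with a hypothetical routine name; note it asserts rather than verifies, so the caller must guarantee the alignment:)

[fortran]subroutine CROSS_ymm_ymm_ymm_assumed_aligned(A,B,C)
  use MOD_ALL
  type(TypeYMM) :: A(3), B(3), C(3)
  ! Assert (not verify) 32-byte alignment of the arguments;
  ! wrong results if the caller passes unaligned data.
  !DIR$ ASSUME_ALIGNED A:32, B:32, C:32
  ! ... cross-product body as in CROSS_ymm_ymm_ymm ...
end subroutine CROSS_ymm_ymm_ymm_assumed_aligned
[/fortran]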

Jim Dempsey

jimdempseyatthecove
Honored Contributor III
Also try this variation:
[fortran]RECURSIVE SUBROUTINE CROSS_ymm_ymm_ymm(VA,VB,VC)
  use MOD_ALL
  type(TypeYMM) :: VA(3), VB(3), VC(3)
  type(TypeYMM), automatic :: VS(3)
  if((loc(VC) .eq. loc(VA)) .or. (loc(VC) .eq. loc(VB))) then
    ! COMPUTE THE CROSS PRODUCT
    !DEC$ VECTOR ALWAYS
    VS(1).v = VA(2).v*VB(3).v - VA(3).v*VB(2).v
    !DEC$ VECTOR ALWAYS
    VS(2).v = VA(3).v*VB(1).v - VA(1).v*VB(3).v
    !DEC$ VECTOR ALWAYS
    VS(3).v = VA(1).v*VB(2).v - VA(2).v*VB(1).v
    ! TRANSFER THE CROSS PRODUCT RESULT TO THE OUTPUT VECTOR
    !DEC$ VECTOR ALWAYS
    VC(1).v = VS(1).v
    !DEC$ VECTOR ALWAYS
    VC(2).v = VS(2).v
    !DEC$ VECTOR ALWAYS
    VC(3).v = VS(3).v
  else
    ! COMPUTE THE CROSS PRODUCT without temps
    !DEC$ VECTOR ALWAYS
    VC(1).v = VA(2).v*VB(3).v - VA(3).v*VB(2).v
    !DEC$ VECTOR ALWAYS
    VC(2).v = VA(3).v*VB(1).v - VA(1).v*VB(3).v
    !DEC$ VECTOR ALWAYS
    VC(3).v = VA(1).v*VB(2).v - VA(2).v*VB(1).v
  endif
END SUBROUTINE CROSS_ymm_ymm_ymm
[/fortran]
The else clause is entered when the destination overlaps neither source.
Inspecting the generated code (in the .ASM listing), the else clause continues to use the stack temp VS.

From the original code, I also tried making the temp VS(2) (two elements) and having the 3rd expression target VC(3) directly, IOW eliminating one temp (roughly as sketched below). This too produced bad code.
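(My reconstruction of that variant from the description; only the changed body shown. Writing VC(3) is safe at that point because VA(3)/VB(3) have already been consumed into the temps:)

[fortran]type(TypeYMM), automatic :: VS(2)   ! only two temps now
!DEC$ VECTOR ALWAYS
VS(1).v = VA(2).v*VB(3).v - VA(3).v*VB(2).v
!DEC$ VECTOR ALWAYS
VS(2).v = VA(3).v*VB(1).v - VA(1).v*VB(3).v
! third expression targets VC(3) directly
!DEC$ VECTOR ALWAYS
VC(3).v = VA(1).v*VB(2).v - VA(2).v*VB(1).v
!DEC$ VECTOR ALWAYS
VC(1).v = VS(1).v
!DEC$ VECTOR ALWAYS
VC(2).v = VS(2).v
[/fortran]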

I can run this without optimization, so this is not a show stopper.

Jim