This is a blog for MUDA development. MUDA is a (short) vector language for CPUs.

Thursday, February 28, 2008

Logical op in LLVM

In MUDA, logical ops on floating-point types are possible.


vec and_func(vec a, vec b)
{
return a & b;
}


This syntax computes a bitwise AND, for example:


// a = (0x3f800000, 0x3f800000, 0x3f800000, 0x3f800000) = (1.0f, 1.0f, 1.0f, 1.0f)
// b = (0xffffffff, 0x00000000, 0x00000000, 0xf0f00000)
// a & b = (0x3f800000, 0x00000000, 0x00000000, 0x30800000)


But in LLVM IR, logical instructions do not accept floating-point operands, so if we want to apply a logical op to floating-point values,
we first have to change the type of the variable from float to integer without changing its bit pattern.

For this purpose, LLVM IR provides the bitcast op.

MUDA's LLVM IR backend emits the following code for the above MUDA input.


define <4 x float> @and_func(<4 x float> %a, <4 x float> %b)
{
    %a.addr = alloca <4 x float>
    store <4 x float> %a, <4 x float>* %a.addr
    %b.addr = alloca <4 x float>
    store <4 x float> %b, <4 x float>* %b.addr
    %t_vec2 = load <4 x float>* %a.addr
    %t_vec3 = load <4 x float>* %b.addr
    %t_ivec4 = bitcast <4 x float> %t_vec2 to <4 x i32>
    %t_ivec5 = bitcast <4 x float> %t_vec3 to <4 x i32>
    %t_ivec6 = and <4 x i32> %t_ivec4, %t_ivec5
    %t_vec1 = bitcast <4 x i32> %t_ivec6 to <4 x float>
    ret <4 x float> %t_vec1
}

The generated LLVM IR is somewhat redundant, but the LLVM optimizer and x86 backend emit exactly what I expected.


$ llvm-as tmp.ll -f; opt -std-compile-opts tmp.bc -f | llc -march=x86
.text
.align 4,0x90
.globl _and_func
_and_func:
andps %xmm1, %xmm0
ret

.subsections_via_symbols


The whole function maps to just one andps instruction. LLVM is so nice!

Friday, February 8, 2008

Work in progress | MUDA -> LLVM backend

I've started implementing an LLVM IR backend for MUDA.

http://lucille.svn.sourceforge.net/viewvc/lucille/angelina/haskellmuda/CodeGenLLVM.hs?view=markup


MUDA's LLVM IR backend is still a work in progress,
but I've had a first success on a simple case.

Here's the MUDA input code:



// input.mu
vec
func( vec a, vec b ) {

return a + b;

}


MUDA's current LLVM backend emits:


$ mudah --llvm input.mu > tmp.ll
$ cat tmp.ll

;;
;; The following code was generated by MUDA compiler
;;
target datalayout = "i32:128:128-f32:128:128"

define <4 x float> @func(<4 x float> %a, <4 x float> %b)
{
    %a.addr = alloca <4 x float>
    store <4 x float> %a, <4 x float>* %a.addr
    %b.addr = alloca <4 x float>
    store <4 x float> %b, <4 x float>* %b.addr
    %t_vec1 = load <4 x float>* %a.addr
    %t_vec2 = load <4 x float>* %b.addr
    %t_vec3 = add <4 x float> %t_vec1, %t_vec2
    ret <4 x float> %t_vec3
}



The LLVM IR generated by the MUDA -> LLVM backend is straightforward and somewhat redundant.

Let's get optimized native code through the LLVM midend and backend:


$ llvm-as tmp.ll -f
$ opt -std-compile-opts -f tmp.bc -o tmp.opt.bc
$ llc tmp.opt.bc -f
$ cat tmp.opt.s

.text
.align 4,0x90
.globl _func
_func:
addps %xmm1, %xmm0
ret

.subsections_via_symbols



This is exactly what I wanted in x86 assembly (just one addps instruction).
LLVM (and its x86 backend) rocks!

Sunday, February 3, 2008

MUDA blog launched.

I've decided to launch MUDA blog site separately.

And I've almost finished the basic implementation of the MUDA language.
If you want to play with MUDA, go to

http://lucille.sourceforge.net/muda/


And check out the current svn tree.

Examples & documentation will be updated soon.

Here are the TODOs:

  • LLVM IR backend (near future)
  • math library for MUDA (near future)
  • Automatic optimization (middle future)
  • Formal verification of computation code by gappa (middle future)

About

My life to be a renderer writer & quant. Here is my main blog. http://lucille.atso-net.jp/blog/