About HAP: Parallel Computation

Homological computations can require considerable time and memory. In principle, many of them can be distributed in fairly obvious ways over several processors on one or more computers, and HAP contains a few basic functions for doing this.
The following commands compute the third integral homology of the fourteen groups of order 16 using two child processes. The first child runs on the local machine; the second runs on a remote machine for which ssh has been configured to require no password. This is an example of "trivial parallelism". On a multiprocessor machine, distinct child processes would normally be assigned to different processors automatically.
gap> s:=ChildProcess();;
gap> t:=ChildProcess("alberti.vlan.nuigalway.ie");;

gap> ChildCommand("Odd:=List([1,3,5,7,9,11,13],i->GroupHomology(SmallGroup(16,i),3));",s);
gap> ChildCommand("Even:=List([2,4,6,8,10,12,14],i->GroupHomology(SmallGroup(16,i),3));",t);
gap> ODD:=ChildGet("Odd",s);;
gap> EVEN:=ChildGet("Even",t);;

gap> for i in [1..7] do
> Print("Group ",2*i-1," has third homology ",ODD[i],"\n");
> Print("Group ",2*i," has third homology ",EVEN[i],"\n");
> od;

Group 1 has third homology [ 16 ]
Group 2 has third homology [ 4, 4, 4 ]
Group 3 has third homology [ 2, 2, 4, 4 ]
Group 4 has third homology [ 2, 4, 4 ]
Group 5 has third homology [ 2, 2, 8 ]
Group 6 has third homology [ 2, 8 ]
Group 7 has third homology [ 2, 2, 8 ]
Group 8 has third homology [ 2, 8 ]
Group 9 has third homology [ 16 ]
Group 10 has third homology [ 2, 2, 2, 2, 2, 2, 4 ]
Group 11 has third homology [ 2, 2, 2, 2, 2, 2, 4 ]
Group 12 has third homology [ 2, 2, 2, 8 ]
Group 13 has third homology [ 2, 2, 2, 8 ]
Group 14 has third homology [ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2 ]
A weakness of the above parallel computation is that the naive division of tasks between the two child processes might leave one process finished long before the other. The command NextAvailableChild() can be used to overcome this problem by assigning each task to whichever child becomes free first. Alternatively, the function ParallelList() can be used.
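As a sketch of the second approach, the computation above could be rebalanced by letting ParallelList() distribute the fourteen tasks over the children, rather than fixing the odd/even split in advance. This fragment follows the same pattern as the timing example further down: the function is first transferred to each child with ChildPut().

```gap
gap> #Sketch: ParallelList() assigns the 14 indices to the children
gap> #dynamically, instead of a fixed odd/even split.
gap> children:=[ChildProcess(),ChildProcess()];;
gap> fn:=function(i) return GroupHomology(SmallGroup(16,i),3); end;;
gap> for s in children do ChildPut(fn,"fn",s); od;
gap> H:=ParallelList([1..14],"fn",children);;
```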

The following commands, on a two-processor IBM laptop, show that parallel computation can halve the time to compute the second homology of the groups of order 128.
gap> s:=ChildProcess();;
gap> t:=ChildProcess();;
gap> fn:=function(i) return GroupHomology(SmallGroup(128,i),2); end;
gap> ChildPut( fn, "fn", s);
gap> ChildPut( fn, "fn", t);

gap> Exec("date");
Sat Oct  6 23:52:55 IST 2007
gap> Lparallel:=ParallelList([1..2328],"fn",[s,t]);;
gap> Exec("date");
Sun Oct  7 00:00:37 IST 2007
gap> L:=List([1..2328], fn);;
gap> Exec("date");
Sun Oct  7 00:14:07 IST 2007
The following commands illustrate a naive parallel computation of the product A=M*N of two random matrices M and N. The computation involves just one child and takes 64% of the time needed for the standard sequential calculation of M*N.
gap> M:=RandomMat(2000,2000);;                     #Two random matrices.
gap> N:=RandomMat(2000,2000);;

gap> s:=ChildProcess();;

gap> Exec("date");
Tue Nov 27 05:17:55 GMT 2007
gap> Mtop:=M{[1..1000]};;                                   #Splitting the first matrix in two
gap> Mbottom:=M{[1001..2000]};;

gap> ChildPut(Mtop,"Mtop",s);                              #Transferring data to the child.
gap> ChildPut(N,"N",s);
gap> NextAvailableChild([s]);;                  #Wait until the child is free.

gap> ChildCommand("Atop:=Mtop*N;;",s);;       #Matrix multiplication on child.
gap> Abottom:=Mbottom*N;;                               #Matrix multiplication on parent.
gap> A:=ChildGet("Atop",s);;
gap> Append(A,Abottom);;                                     #A is the product M*N.
gap> Exec("date");
Tue Nov 27 05:18:47 GMT 2007
The parallel multiplication took 52 seconds on a two-processor IBM laptop, compared to 81 seconds below for multiplication on a single processor. The parallel time is thus 64% of the sequential time, a speedup factor of about 1.56.
gap> Exec("date");M*N;;Exec("date");
Tue Nov 27 05:13:56 GMT 2007
Tue Nov 27 05:15:17 GMT 2007
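The two-block scheme generalises to any number of child processes. The following function is only a sketch, not part of HAP: it assumes that the number of children divides the number of rows of M, and that ChildGet() blocks until its child has finished its work.

```gap
#################################################
BlockParallelProduct:=function(M,N,children)
local k,b,i,A;

    k:=Length(children);
    b:=Length(M)/k;                       #Assumes k divides Length(M).

    for i in [1..k] do                    #Send one row-block of M, and
        ChildPut(M{[1+(i-1)*b..i*b]},"Mblock",children[i]);  #all of N,
        ChildPut(N,"N",children[i]);                         #to each child.
        ChildCommand("Ablock:=Mblock*N;;",children[i]);
    od;

    A:=[];
    for i in [1..k] do                    #Collect the blocks in order;
        Append(A,ChildGet("Ablock",children[i]));  #ChildGet() is assumed
    od;                                            #to wait for the child.
    return A;
end;
#################################################
```

Here all the multiplication is done on the children; with a small number of processors it may be better to keep one block on the parent, as in the two-block example above.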
The following commands show how two processors could be used to compute H8(G,Z) for the group G=SmallGroup(512,1201). The commands begin with the definition of the function

ParallelResolution(G,N,n,t).

Unfortunately, this example is too naive: the loads on the two processors are extremely unbalanced.
#################################################
ParallelResolution:=function(G,N,n,t)
local R,S,T,nat,iso;

    nat:=NaturalHomomorphismByNormalSubgroup(G,N);
    iso:=IsomorphismPermGroup(N);      #HAPPrintTo() and HAPRead() work only
    ChildPut(n,"n",t);                 #with permutation or matrix groups.
    ChildPut(Image(iso),"N",t);
    ChildCommand("R:=ResolutionFiniteGroup(N,n);",t);
    ChildCommand("HAPPrintTo(\"/tmp/test\",R);",t);

    S:=ResolutionFiniteGroup(Image(nat),n);

    while not IsAvailableChild(t) do   #This busy-wait ensures that the next
    od;                                #command is not processed too soon.

    R:=HAPRead("/tmp/test");
    R!.group:=N;
    R!.elts:=List(R!.elts,x->PreImagesRepresentative(iso,x));

    T:=ResolutionFiniteExtension(
            GeneratorsOfGroup(G),
            List(GeneratorsOfGroup(G),x->Image(nat,x)),
            S,n,false,R);
    return T;
end;
#################################################


gap> G:=SmallGroup(512,1201);;
gap> t:=ChildProcess();;
gap> T:=ParallelResolution(G,Centre(G),9,t);;
gap> C:=TensorWithIntegers(T);;

gap> Homology(C,8);
[ 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 4, 4, 4, 8 ]