HPC Pack SOA Tutorial IV – Common Data

When multiple tasks run on an HPC cluster, they often work with a shared data set. For example, a stock risk analysis program can run a large number of simulations against a set of historical stock market data. In this example, each SOA request can have different parameters, but all analyses use the same historical data. In HPC Pack, such a shared data set is called common data.

Common data must be transferred to the services running on compute nodes. Since the data is static, it would be inefficient to transfer it within each SOA request. A better solution is to send the data set to the cluster once and store it in a centralized place that all services can access. In this blog post, we'll see how to do that. (Note: You could also store this data in a database that the HPC cluster has access to.)

We will still be using our prime factorization example (see this blog post for details). To accelerate the algorithm, we want to use a prime number table, so that we can look up prime factors in the table instead of testing every number. Obviously, the prime number table should be common data shared by all factorization requests. We'll follow these steps to handle it:

1. Implement a data manager

a. Send common data

b. Release common data

2. Implement the service and get common data into the service

3. Implement the client

4. Test the common data service

We also discuss how to configure common data storage.

See the accompanying code sample to follow the steps in this article.

1. Implement a data manager

First, we create a data manager to handle the common data’s lifecycle.

To begin, generate a prime number table as our common data. The implementation of CreatePrimeNumberTable is not central to our topic, so we won't go into detail here; an illustrative sketch follows the snippet below.

//create prime number table of 200000 prime numbers

List<int> PrimeNumberTable = CreatePrimeNumberTable(200000);
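For reference, here is one possible implementation of CreatePrimeNumberTable, a plain sieve of Eratosthenes that returns the first count primes. This is only a sketch; the accompanying code sample may implement it differently.

    //Illustrative only: return the first `count` prime numbers with a simple sieve.
    static List<int> CreatePrimeNumberTable(int count)
    {
        //Rough upper bound for the count-th prime: n(ln n + ln ln n), valid for count >= 6.
        int limit = (int)(count * (Math.Log(count) + Math.Log(Math.Log(count)))) + 10;
        bool[] isComposite = new bool[limit + 1];
        List<int> primes = new List<int>(count);

        for (int i = 2; i <= limit && primes.Count < count; i++)
        {
            if (isComposite[i])
                continue;

            primes.Add(i);
            //Mark all further multiples of this prime as composite.
            for (long j = (long)i * i; j <= limit; j += i)
                isComposite[j] = true;
        }

        return primes;
    }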

Common data is managed by the DataClient type in HPC Pack. To create a data client, we prepare the following information:

const string headnode = "head.contoso.com";

string dataId = "PRIME_NUMBER_TABLE";

The data ID is used to identify each data client, so it must be a unique value.

Create a data client:

//create DataClient to send data

DataClient dataClient = DataClient.Create(headnode, dataId);

 

If a data client with the same ID already exists, an exception will be thrown. In this case, if the existing common data is still needed, we'll have to change our data ID; otherwise, just delete it as follows:

//delete the data client

DataClient.Delete(headnode, dataId);
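If the data manager might be run more than once with the same data ID, one simple pattern is to clear any stale data client up front and then create a fresh one. This is only a sketch, and it assumes the old table is no longer needed:

    //Sketch: make the data manager re-runnable by removing any stale data client first.
    try
    {
        DataClient.Delete(headnode, dataId);
    }
    catch (Exception)
    {
        //Ignore the error here, e.g. when no data client with this ID exists yet.
    }

    DataClient dataClient = DataClient.Create(headnode, dataId);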

 

Send the common data to the cluster by invoking the WriteAll method.

//Send data to service.

//WriteAll() can only be called once on a DataClient object

dataClient.WriteAll<List<int>>(PrimeNumberTable);

 

We use WriteAll to send structured data to the cluster. Any serializable data can be sent this way.
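For instance, the historical stock market data mentioned in the introduction could be sent the same way, on its own data client. The dictionary type, the LoadHistoricalPrices helper, and the data ID below are purely illustrative and not part of the sample:

    //Illustrative only: send another serializable structure under its own data ID.
    Dictionary<string, double[]> historicalPrices = LoadHistoricalPrices();

    using (DataClient pricesClient = DataClient.Create(headnode, "HISTORICAL_PRICES"))
    {
        pricesClient.WriteAll<Dictionary<string, double[]>>(historicalPrices);
    }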

Raw data can also be sent by invoking the WriteRawBytesAll method. For example, if we read the prime numbers directly from a file named PrimeNumbers, we can send the data like this:

//WriteRawBytesAll can be called only once per data client.
dataClient.WriteRawBytesAll(File.ReadAllBytes("PrimeNumbers"));

 

However, WriteAll and WriteRawBytesAll can be called only once per data client: only one “write” operation is allowed on each data client, although multiple “read” operations can be performed.

Typically, we send common data in the client and read it in the services. In this scenario, the common data is read-only to the services.

By now, the prime number table has been sent to the HPC cluster, and all the service requests can access it by providing the correct data ID. The data will remain in the cluster until it is explicitly deleted.

2. Implement the service and get common data into the service

When implementing our service, the service contract remains the same.

As mentioned before, the service can get the data client through its data ID and read the common data by invoking the ReadAll method.

List<int> PrimeNumberTable;

using (DataClient dataClient = ServiceContext.GetDataClient("PRIME_NUMBER_TABLE"))
{
    PrimeNumberTable = dataClient.ReadAll<List<int>>();
}

 

Alternatively, use the ReadRawBytesAll method to read raw data.

    byte[] PrimeNumberTableRaw;

    using (DataClient dataClient = ServiceContext.GetDataClient("PRIME_NUMBER_TABLE"))
    {
        PrimeNumberTableRaw = dataClient.ReadRawBytesAll();
    }

To avoid reading the common data into memory every time a new service object is created, it is good practice to read it once in a static constructor, as sketched below.
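Here is a minimal sketch of that pattern. The class name PrimeFactorization is assumed for illustration; the class implements the IPrimeFactorization contract used by the client, and the namespaces are the same ones used by the rest of the sample:

    public class PrimeFactorization : IPrimeFactorization
    {
        //Cached once per service host process and shared by all service objects.
        private static readonly List<int> PrimeNumberTable;

        static PrimeFactorization()
        {
            //Read the common data only once, when the type is first loaded.
            using (DataClient dataClient = ServiceContext.GetDataClient("PRIME_NUMBER_TABLE"))
            {
                PrimeNumberTable = dataClient.ReadAll<List<int>>();
            }
        }

        //Factorize(...) then uses PrimeNumberTable as shown below.
    }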

With the prime number table, factorization can be implemented more efficiently, like this:

    public List<int> Factorize(int n)
    {
        List<int> factors = new List<int>();

        //Divide out factors that are in PrimeNumberTable
        for (int i = 0; i < PrimeNumberTable.Count; )
        {
            int prime = PrimeNumberTable[i];
            if (n % prime == 0)
            {
                factors.Add(prime);
                n /= prime;
            }
            else
            {
                i++;
            }
        }

        //Divide out any remaining factors that are not in PrimeNumberTable
        for (int i = PrimeNumberTable.Max() + 1; i <= n; )
        {
            if (n % i == 0)
            {
                factors.Add(i);
                n /= i;
            }
            else
            {
                i++;
            }
        }

        return factors;
    }

3. Implement the client

We’ll create a simple client just to test the service.

    //Change headnode here
    const string headnode = "head.contoso.com";
    const string serviceName = "PrimeFactorizationWithCommonData";
    SessionStartInfo info = new SessionStartInfo(headnode, serviceName);
    Random random = new Random();

    try
    {
        //create an interactive session
        using (Session session = Session.CreateSession(info))
        {
            Console.WriteLine("Session {0} has been created", session.Id);

            using (BrokerClient<IPrimeFactorization> client = new BrokerClient<IPrimeFactorization>(session))
            {
                //send request
                int num = random.Next(1, Int32.MaxValue);
                FactorizeRequest request = new FactorizeRequest(num);

                client.SendRequest<FactorizeRequest>(request, num);
                client.EndRequests();

                //get response
                foreach (BrokerResponse<FactorizeResponse> response in client.GetResponses<FactorizeResponse>())
                {
                    int number = response.GetUserData<int>();
                    int[] factors = response.Result.FactorizeResult;

                    Console.WriteLine("{0} = {1}", number,
                        string.Join<int>(" * ", factors));
                }
            }

            session.Close();
            Console.WriteLine("done");
            Console.WriteLine("Press any key to exit");
            Console.ReadKey();
        }
    }
    catch (System.Exception ex)
    {
        Console.WriteLine(ex.Message);
    }

 

4. Test the common data service

Run DataManager.exe to send common data to the cluster.


Now start Client.exe. We can see that the service runs correctly.


5. Configure common data storage

Common data is stored in a shared folder that all compute nodes can access. This path is determined by the environment variable HPC_RUNTIMESHARE. You can check it by using the HPC Pack command-line tool cluscfg.

Type cluscfg listenvs in a command window, and you’ll see a list of environment variables. By default, the value of HPC_RUNTIMESHARE is \\COMPUTE_NAME\Runtime$. This is a shared path, which is mapped to C:\HPCRuntimeDirectory on the head node.

If you need to, you can change the value of HPC_RUNTIMESHARE by using cluscfg setenvs. Ensure that all compute nodes have read access to this path, and that clients have write access if they need to send common data.
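For example, to point the runtime share at a different file share, you could run something like the following (the server and share names are only illustrative):

    cluscfg setenvs HPC_RUNTIMESHARE=\\fileserver\HpcRuntimeShare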

For more information, see Configuring the Runtime Data Share.