Multiple Simulated Sensors

This tutorial builds on the Apartment Scene tutorial, so it is recommended that you complete that tutorial before moving on to this one. This tutorial begins where the previous tutorial ended.

This tutorial teaches you how to:

  • Create the scene
  • Partner with additional services
  • Read and display data from the sensors
  • Add the new service to the manifest in DSSMe

This tutorial is provided in the C# language. You can find the project files for this tutorial at the following location under the Microsoft Robotics Developer Studio installation folder:

Samples\SimulationTutorials\Advanced\Multiple Simulated Sensors

Creating the scene

Before adding the new services to our scene, we will need additional entities so the services can communicate with the simulator. Run the Simulation Empty Project tutorial, load in the apartment scene ("File" -> "Open Scene..." -> "samples\config\Apartment.SimulationEngineState.xml"), and start the EntityUI manifest ("File" -> "Open Manifest..." -> "samples\Config\EntityUI.manifest.xml"). Now add a Motor Base with all the sensors selected as shown below.

Multiple Simulated Sensors - Adding the MotorBase with several sensors

After adding the entities, save the scene ("File" -> "Save Scene As...") so we can open it in DSSMe later on.

Partnering with additional services

This tutorial uses eight different services, so one of the easier ways to generate the code for the new service is through the DSS New Service Visual Studio project wizard (explained in the "Simulation Empty Project" tutorial).

The services this tutorial uses are listed below.

  • Color sensor - allows the robot to sense the approximate color of what is in front of it
  • Brightness sensor - allows the robot to sense the approximate brightness in front of it
  • Compass - allows the robot to sense the direction it is oriented (in degrees) with respect to North (the +Z axis in the simulator)
  • LaserRangeFinder - allows for determining the distance to objects in front of the laser range finder
  • Sonar - similar to the LaserRangeFinder, but with a wider sensing cone
  • Infrared - allows for sensing objects directly in front of the sensor, but with very limited range
  • SimulatedWebcam - provides images of the simulated scene from the robot's point of view
  • DifferentialDriveService - allows for issuing drive commands to the robot

See the picture below to make sure you have added all the services before closing the wizard. Also, remember to set the "Creation policy" to "UsePartnerListEntry" for all the services except the SimulationEngine service. This will allow us to easily add and associate sensors with multiple robots in later tutorials.

Multiple Simulated Sensors - Using the DSS New Service (2.0) Wizard
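
For reference, the wizard expresses the creation policy as a Partner attribute on each generated partner port. The sketch below shows roughly what the generated declaration for the laser range finder partner might look like; the partner name used here is an assumption and will vary with the options you chose in the wizard.

// Illustrative sketch of a wizard-generated partner declaration; the partner
// name ("SimulatedLRF") is an assumption and may differ in your project.
[Partner("SimulatedLRF", Contract = sicklrf.Contract.Identifier,
    CreationPolicy = PartnerCreationPolicy.UsePartnerListEntry)]
sicklrf.SickLRFOperations _simulatedLRFServicePort = new sicklrf.SickLRFOperations();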

After the DSS New Service wizard generates your project, you will need to make two slight modifications to allow the service to build correctly. Search for "using drive = Microsoft.Robotics.Services.Simulation.Drive.Proxy;" and replace it with "using simulateddrive = Microsoft.Robotics.Services.Simulation.Drive.Proxy;". Also search for "using sonar = Microsoft.Robotics.Services.Simulation.Sensors.Sonar.Proxy;" and replace it with "using simsonar = Microsoft.Robotics.Services.Simulation.Sensors.Sonar.Proxy;".

These replacements are needed because, when a simulated service has the same name as a non-simulated service, the DSS New Service wizard generates two using statements with the same alias. The aliases must be unique for the service to build correctly.
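
After the two replacements, the affected using directives should read as follows:

using simulateddrive = Microsoft.Robotics.Services.Simulation.Drive.Proxy;
using simsonar = Microsoft.Robotics.Services.Simulation.Sensors.Sonar.Proxy;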

Reading and displaying data from the sensors

Open the Visual Studio project that was generated with the DSS New Service wizard. It is a good idea to keep the Apartment Scene tutorial open while working on this service, as some of the code here is similar to code from that tutorial.

(Optional) Creating the UI to display the sensor readings

The UI for this tutorial is shown in the picture below. If you are unfamiliar with Windows Forms programming, you can omit this step and output sensor readings to the console instead.

Multiple Simulated Sensors - Creating the WinForm

The dashed boxes below "Webcam Display", "Laser Range Finder Distance Measurements", and "Sonar Distance Measurements" are all panels. The box below "Analog Sensor Readings" is a property grid. It should be fairly straightforward to drag the various controls onto the WinForm from the Toolbox tab in Visual Studio.

There are many ways to display data on a WinForm, so the full source is omitted here to keep the tutorial clear and relevant to RDS. However, you can find the source for the WinForm in "samples\SimulationTutorials\Advanced\Multiple Simulated Sensors\ImageProcessingResultForm.cs".

Updating sensor data

As in the previous tutorial, the service needs some additional data members.

Port<DateTime> _dateTimePort = new Port<DateTime>();

// used to display gradient we compute
ImageProcessingResultForm _imageProcessingForm;

Also as in the previous tutorial, we need to run the WinForm and activate the first timer.

WinFormsServicePort.Post(new RunForm(() =>
{
    _imageProcessingForm = new ImageProcessingResultForm();
    _imageProcessingForm.Show();

    Activate(Arbiter.ReceiveWithIterator(false, _dateTimePort, UpdateSensorData));
    TaskQueue.EnqueueTimer(TimeSpan.FromMilliseconds(60), _dateTimePort);

    return _imageProcessingForm;
}));

Updating sensor data is done in a similar fashion to the method used in the Apartment Scene tutorial. Again we use a timer that queries the sensors every 60 milliseconds. The UpdateSensorData method is slightly modified since we now read data from seven sensors instead of one.

Insert the following method.

IEnumerator<ITask> UpdateSensorData(DateTime dateTime)
{
    var resultPort = new CompletionPort();
    PostOnTaskCompletion(resultPort, UpdateColorSensor);
    PostOnTaskCompletion(resultPort, UpdateBrightnessSensor);
    PostOnTaskCompletion(resultPort, UpdateCompass);
    PostOnTaskCompletion(resultPort, UpdateLRF);
    PostOnTaskCompletion(resultPort, UpdateSonar);
    PostOnTaskCompletion(resultPort, UpdateInfrared);
    PostOnTaskCompletion(resultPort, UpdateWebCamImage);

    Activate(Arbiter.MultipleItemReceive(false, resultPort, 7, allComplete =>
        {
            Activate(Arbiter.ReceiveWithIterator(false, _dateTimePort, UpdateSensorData));
            TaskQueue.EnqueueTimer(TimeSpan.FromMilliseconds(60), _dateTimePort);
        }));

    yield break;
}

Note the call to PostOnTaskCompletion(). In the Apartment Scene tutorial we could simply "yield return" a single IterativeTask because there was only one of them. If we used the same approach here, we would logically block on each yield return, even though the sensor queries can be performed concurrently (asynchronously), so there is no need to run them synchronously. However, we don't want to activate another timer until all the sensor queries have finished. To accomplish this, we use a MultipleItemReceive that waits for all the tasks to complete. This lets the sensors be queried concurrently while also preventing too many messages from accumulating on the service's TaskQueue.
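
For contrast, a purely sequential version of this method would look roughly like the sketch below (the method name is hypothetical). Each yield return suspends the iterator until that sensor's query completes, so the seven queries would run one after another instead of overlapping.

// Hypothetical sequential alternative (not used in this tutorial):
// each yield return blocks until the corresponding sensor query finishes.
IEnumerator<ITask> UpdateSensorDataSequentially(DateTime dateTime)
{
    yield return new IterativeTask(UpdateColorSensor);
    yield return new IterativeTask(UpdateBrightnessSensor);
    yield return new IterativeTask(UpdateCompass);
    yield return new IterativeTask(UpdateLRF);
    yield return new IterativeTask(UpdateSonar);
    yield return new IterativeTask(UpdateInfrared);
    yield return new IterativeTask(UpdateWebCamImage);

    Activate(Arbiter.ReceiveWithIterator(false, _dateTimePort, UpdateSensorData));
    TaskQueue.EnqueueTimer(TimeSpan.FromMilliseconds(60), _dateTimePort);
}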

Querying each of the sensors is done in a fairly similar fashion. For the SimulatedWebcam, a QueryFrame() message is posted to get the webcam image. For the other sensors, a Get message is posted to the service's port. After retrieving the sensor's state, we use the WinForms CCR adapter to safely communicate with the WinForm.

The full source for the methods that are called in UpdateSensorData() is listed below.

IEnumerator<ITask> UpdateLRF() 
{
    var sensorOrFault = _simulatedLRFServicePort.Get();
    yield return sensorOrFault.Choice();

    if(!HasError(sensorOrFault))
    {
        sicklrf.State sensorState = (sicklrf.State)sensorOrFault;
        WinFormsServicePort.Post(new FormInvoke(() =>
        {
            _imageProcessingForm.SetLRFData(sensorState.DistanceMeasurements, sensorState.Units);
        }));
    }
    yield break;
}
IEnumerator<ITask> UpdateSonar()
{
    var sensorOrFault = _simulatedSonarServicePort.Get();
    yield return sensorOrFault.Choice();

    if (!HasError(sensorOrFault))
    {
        sonar.SonarState sensorState = (sonar.SonarState)sensorOrFault;
        WinFormsServicePort.Post(new FormInvoke(() =>
        {
            _imageProcessingForm.SetSonarData(sensorState.DistanceMeasurements, sensorState.DistanceMeasurement, sensorState.MaxDistance);
        }));
    }
    yield break;
}
IEnumerator<ITask> UpdateColorSensor()
{
    var sensorOrFault = _simulatedColorSensorPort.Get();
    yield return sensorOrFault.Choice();

    if (!HasError(sensorOrFault))
    {
        colorsensor.ColorSensorState sensorState = (colorsensor.ColorSensorState)sensorOrFault;
        WinFormsServicePort.Post(new FormInvoke(() =>
            {
                _imageProcessingForm.SetColorReadingValue(sensorState.NormalizedAverageRed,
                    sensorState.NormalizedAverageGreen, sensorState.NormalizedAverageBlue);
            }));
    }
}
IEnumerator<ITask> UpdateBrightnessSensor()
{
    var sensorOrFault = _simulatedBrightnessCellPort.Get();
    yield return sensorOrFault.Choice();

    if (!HasError(sensorOrFault))
    {
        analogsensor.AnalogSensorState sensorState = (analogsensor.AnalogSensorState)sensorOrFault;
        WinFormsServicePort.Post(new FormInvoke(() =>
        {
            _imageProcessingForm.SetBrightnessReadingValue(sensorState.NormalizedMeasurement);
        }));
    }
}
IEnumerator<ITask> UpdateCompass()
{
    var sensorOrFault = _simulatedCompassPort.Get();
    yield return sensorOrFault.Choice();

    if (!HasError(sensorOrFault))
    {
        analogsensor.AnalogSensorState sensorState = (analogsensor.AnalogSensorState)sensorOrFault;
        WinFormsServicePort.Post(new FormInvoke(() =>
        {
            _imageProcessingForm.SetCompassReadingValue(sensorState.RawMeasurement);
        }));
    }
}
IEnumerator<ITask> UpdateInfrared()
{
    var sensorOrFault = _simulatedIRServicePort.Get();
    yield return sensorOrFault.Choice();

    if (!HasError(sensorOrFault))
    {
        analogsensor.AnalogSensorState sensorState = (analogsensor.AnalogSensorState)sensorOrFault;
        WinFormsServicePort.Post(new FormInvoke(() =>
        {
            _imageProcessingForm.SetIRReadingValue(sensorState.NormalizedMeasurement);
        }));
    }
    yield break;
}
IEnumerator<ITask> UpdateWebCamImage()
{
    byte[] rgbData = null;
    Size size = new Size(0, 0);

    yield return Arbiter.Choice(_simulatedWebcamServicePort.QueryFrame(),
        success =>
        {
            rgbData = success.Frame;
            size = success.Size;
        },
        failure =>
        {
            LogError(failure.ToException());
        });

    if (rgbData != null)
    {
        ComputeGradient(ref rgbData, size);
        UpdateBitmap(rgbData, size);
    }
}

private void UpdateBitmap(byte[] rgbData, Size size)
{
    if (_imageProcessingForm == null)
        return;

    WinFormsServicePort.Post(new FormInvoke(() =>
        {
            Bitmap bmp = _imageProcessingForm.WebcamBitmap;
            CopyBytesToBitmap(rgbData, size.Width, size.Height, ref bmp);
            if (bmp != _imageProcessingForm.WebcamBitmap)
            {
                _imageProcessingForm.UpdateWebcamImage(bmp);
            }
            _imageProcessingForm.Invalidate(true);
        }));
}

private void ComputeGradient(ref byte[] rgbData, Size size)
{
    byte[] gradient = new byte[rgbData.Length];
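    // 3x3 diagonal gradient kernel: positive weights toward the top-left corner and
    // negative weights toward the bottom-right, so the filter responds to diagonal edges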
    int[,] mask = new [,]
        {
            {+2, +1, 0},
            {+1, 0, -1},
            {0, -1, -2}
        };
    const int filterSize = 3;
    const int halfFilterSize = filterSize/2;

    // convolve using a simple n^2 method, but this can easily be made 2n
    for (int y = halfFilterSize; y < size.Height - halfFilterSize; y++)
    {
        for (int x = halfFilterSize; x < size.Width - halfFilterSize; x++)
        {
            float result = 0;
            for (int yy = -halfFilterSize; yy <= halfFilterSize; ++yy)
            {
                int y0 = yy + y;
                for (int xx = -halfFilterSize; xx <= halfFilterSize; ++xx)
                {
                    int x0 = xx + x;
                    int k = mask[yy + halfFilterSize, xx + halfFilterSize];
                    int i = 3 * (y0 * size.Width + x0);
                    int r = rgbData[i];
                    int g = rgbData[i + 1];
                    int b = rgbData[i + 2];
                    result += k * (r + g + b) / (3.0f);
                }
            }
            result /= 4.0f; // normalize by max value 
            //the "result*5" makes edges more visible in the image, but is not really necessary
            //  (only nice for display purposes)
            byte byteResult = Clamp(Math.Abs(result*5.0f), 0.0f, 255.0f);
            int idx = 3 * (y * size.Width + x);
            gradient[idx] = byteResult;
            gradient[idx + 1] = byteResult;
            gradient[idx + 2] = byteResult;
        }
    }

    rgbData = gradient;
}

private byte Clamp(float x, float min, float max)
{
    return (byte)Math.Min(Math.Max(min, x), max);
}


/// <summary>
/// Updates a bitmap from a byte array
/// </summary>
/// <param name="srcData">Should be 32 or 24 bits per pixel (ARGB or RGB format)</param>
/// <param name="srcDataWidth">Width of the image srcData represents</param>
/// <param name="srcDataHeight">Height of the image srcData represents</param>
/// <param name="destBitmap">Bitmap to copy to. Will be recreated if necessary to copy to the array.</param>
static internal void CopyBytesToBitmap(byte[] srcData, int srcDataWidth, int srcDataHeight, ref Bitmap destBitmap)
{
    int bytesPerPixel = srcData.Length / (srcDataWidth * srcDataHeight);
    if (destBitmap == null
        || destBitmap.Width != srcDataWidth
        || destBitmap.Height != srcDataHeight
        || (destBitmap.PixelFormat == PixelFormat.Format32bppArgb && bytesPerPixel == 3)
        || (destBitmap.PixelFormat == PixelFormat.Format32bppRgb && bytesPerPixel == 3)
        || (destBitmap.PixelFormat == PixelFormat.Format24bppRgb && bytesPerPixel == 4))
    {
        if (bytesPerPixel == 3)
            destBitmap = new Bitmap(srcDataWidth, srcDataHeight, PixelFormat.Format24bppRgb);
        else
            destBitmap = new Bitmap(srcDataWidth, srcDataHeight, PixelFormat.Format32bppRgb);
    }
    BitmapData bmpData = null;
    try
    {
        if (bytesPerPixel == 3)
            bmpData = destBitmap.LockBits(new Rectangle(0, 0, srcDataWidth, srcDataHeight), ImageLockMode.WriteOnly, PixelFormat.Format24bppRgb);
        else
            bmpData = destBitmap.LockBits(new Rectangle(0, 0, srcDataWidth, srcDataHeight), ImageLockMode.WriteOnly, PixelFormat.Format32bppRgb);

        Marshal.Copy(srcData, 0, bmpData.Scan0, srcData.Length);
        destBitmap.UnlockBits(bmpData);
    }
    catch (Exception)
    {
    }
}

To build this service, the helper methods below are also required.

class CompletionPort : Port<bool> { }
IEnumerator<ITask> PostOnTaskCompletionHelper(CompletionPort completionPort, IteratorHandler handler)
{
    yield return new IterativeTask(handler);
    completionPort.Post(true);
}
void PostOnTaskCompletion(CompletionPort completionPort, IteratorHandler handler)
{
    SpawnIterator<CompletionPort, IteratorHandler>(completionPort, handler, PostOnTaskCompletionHelper);
}

bool HasError<T>(PortSet<T, Fault> sensorOrFault)
{
    Fault fault = (Fault)sensorOrFault;
    if (fault != null)
    {
        LogError(fault.ToException());
        return true;
    }
    else
        return false;
}

You will also need to implement several methods of the _imageProcessingForm data member to build the service. Alternatively, you can replace the method calls to _imageProcessingForm with calls to LogInfo() if you are unfamiliar with WinForms programming. A default implementation of the ImageProcessingResultForm class can be found in this tutorial's folder: (MRDS Install)\Samples\SimulationTutorials\Advanced\Multiple Simulated Sensors, where (MRDS Install) is the location where you installed MRDS.
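
For example, if you skip the WinForm, the FormInvoke block in UpdateCompass() could be replaced with a single log call, as in this illustrative sketch (the message format is arbitrary):

// Illustrative alternative to the WinForm call in UpdateCompass()
LogInfo(string.Format("Compass heading: {0:F1} degrees", sensorState.RawMeasurement));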

Adding the new service to the manifest in DSSMe

After building the service, start DSSMe and open the manifest that was created when you saved your scene from the simulator. It should look similar to the picture below.

Multiple Simulated Sensors - Manifest that was saved from the simulator

Now add the service generated by the DSS New Service wizard into DSSMe (by double-clicking the service or dragging it onto the list of services). In the following screenshots it is named "MultipleSimulatedSensors", but you may have chosen a different name for the service when generating it with the DSS New Service wizard. You should see a window similar to the picture below.

Multiple Simulated Sensors - Adding the new service to the existing manifest

Note all the red exclamation points. DSSMe is informing you that the MultipleSimulatedSensors service requires partner services to run correctly. Drag and drop (or copy and paste) existing services into the blocks with the red exclamation points. When you are done, DSSMe should look like the picture below.

Multiple Simulated Sensors - Partnering with existing services

You can also add the Simple Dashboard service to drive the robot around. Now run the manifest ("Run" -> "Run Manifest..."). The scene should look similar to the image below.

Multiple Simulated Sensors - Apartment scene with the WinForm displaying various sensor data

Summary

In this tutorial, you learned how to:

  • Create the scene
  • Partner with additional services
  • Read and display data from the sensors
  • Add the new service to the manifest in DSSMe


© 2012 Microsoft Corporation. All Rights Reserved.