Volume rendering
If you're not familiar with volume rendering, we recommend reading the overview first.
Representing 3D textures
On the CPU:
public struct Int3 { public int X, Y, Z; /* ... */ }
public class VolumeHeader {
    public readonly Int3 Size;
    public VolumeHeader(Int3 size) { this.Size = size; }

    public int CubicToLinearIndex(Int3 index) {
        return index.X + (index.Y * Size.X) + (index.Z * (Size.X * Size.Y));
    }

    public Int3 LinearToCubicIndex(int linearIndex) {
        return new Int3((linearIndex / 1) % Size.X,
                        (linearIndex / Size.X) % Size.Y,
                        (linearIndex / (Size.X * Size.Y)) % Size.Z);
    }
    /* ... */
}
public class VolumeBuffer<T> {
    public readonly VolumeHeader Header;
    public readonly T[] DataArray;

    public VolumeBuffer(Int3 size) {
        this.Header = new VolumeHeader(size);
        this.DataArray = new T[size.X * size.Y * size.Z];
    }

    public T GetVoxel(Int3 pos) {
        return this.DataArray[this.Header.CubicToLinearIndex(pos)];
    }

    public void SetVoxel(Int3 pos, T val) {
        this.DataArray[this.Header.CubicToLinearIndex(pos)] = val;
    }

    public T this[Int3 pos] {
        get { return this.GetVoxel(pos); }
        set { this.SetVoxel(pos, value); }
    }
    /* ... */
}
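A short usage sketch; the 256-cubed size, the float voxel type, and the sample values are illustrative assumptions rather than anything prescribed above:
// Illustrative only: allocate a 256^3 float volume and round-trip a single voxel.
var volume = new VolumeBuffer<float>(new Int3 { X = 256, Y = 256, Z = 256 });
var pos = new Int3 { X = 10, Y = 20, Z = 30 };
volume[pos] = 0.5f;                       // write through the indexer
float intensity = volume.GetVoxel(pos);   // reads back 0.5f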
On the GPU:
float3 _VolBufferSize;

int3 UnitVolumeToIntVolume(float3 coord) {
    return (int3)( coord * _VolBufferSize.xyz );
}

int IntVolumeToLinearIndex(int3 coord, int3 size) {
    return coord.x + ( coord.y * size.x ) + ( coord.z * ( size.x * size.y ) );
}

uniform StructuredBuffer<float> _VolBuffer;

float SampleVol(float3 coord3) {
    int3 intIndex3 = UnitVolumeToIntVolume( coord3 );
    int index1D = IntVolumeToLinearIndex( intIndex3, (int3)_VolBufferSize.xyz );
    return _VolBuffer[index1D];
}
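The shader built-ins used later (_Object2World, _WorldSpaceCameraPos) suggest a Unity context, so here is a minimal Unity-flavored sketch of filling _VolBuffer and _VolBufferSize from the CPU; the VolumeUpload class name and the volumeMaterial parameter are assumptions for illustration:
using UnityEngine;

public static class VolumeUpload {
    // Illustrative only: bind a VolumeBuffer<float> to a material that declares
    // _VolBuffer and _VolBufferSize as in the shader code above.
    public static ComputeBuffer Upload(VolumeBuffer<float> volume, Material volumeMaterial) {
        var gpuBuffer = new ComputeBuffer(volume.DataArray.Length, sizeof(float));
        gpuBuffer.SetData(volume.DataArray);                // copy voxel intensities to the GPU
        volumeMaterial.SetBuffer("_VolBuffer", gpuBuffer);  // bound as StructuredBuffer<float>
        volumeMaterial.SetVector("_VolBufferSize", new Vector4(
            volume.Header.Size.X, volume.Header.Size.Y, volume.Header.Size.Z, 0));
        return gpuBuffer; // caller is responsible for calling Release() when done
    }
}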
Shading and gradients
How to shade a volume (for example an MRI scan) into a useful visualization. The primary method is to pick an "intensity window" (a minimum and a maximum) that you want to see intensities within, and simply scale into that range to view the black-and-white intensity. A "color ramp" can then be applied to the values within that range, and stored as a texture, so that different parts of the intensity spectrum can be shaded with different colors:
float4 ShadeVol( float intensity ) {
    float unitIntensity = saturate( (intensity - IntensityMin) / ( IntensityMax - IntensityMin ) );
    float4 color;

    // Simple two point black and white intensity:
    color.rgba = unitIntensity;

    // Color ramp method:
    color.rgba = tex2D( ColorRampTexture, float2( unitIntensity, 0 ) );

    return color;
}
In many applications we store both a raw intensity value and a "segment index" in the volume (to segment out different parts such as skin and bone; these segments are normally created by experts in dedicated tools). This can be combined with the approach above to apply a different color, or even a different color ramp, to each segment index:
// Change color to match segment index (fade each segment towards black):
color.rgb = SegmentColors[ segment_index ] * color.a; // brighter alpha gives brighter color
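On the CPU side, one way to carry both values per voxel is a small struct stored in the same VolumeBuffer<T> container shown earlier; this layout, including the LabeledVoxel name and the example labels, is an assumption for illustration:
public struct LabeledVoxel {
    public float Intensity;   // raw scan value (e.g., MRI intensity)
    public byte SegmentIndex; // expert-authored label, e.g., 0 = background, 1 = skin, 2 = bone
}
// var segmentedVolume = new VolumeBuffer<LabeledVoxel>(new Int3 { X = 256, Y = 256, Z = 256 });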
Volume slicing in a shader
The first step is to create a "slice plane" that can move through the volume, slicing it and showing the scan values at each point. This assumes there's a 'VolumeSpace' cube, which represents where the volume sits in the world and can be used as a reference for placing the points:
// In the vertex shader:
float4 worldPos = mul(_Object2World, float4(input.vertex.xyz, 1));
float4 volSpace = mul(_WorldToVolume, float4(worldPos.xyz, 1));

// In the pixel shader:
float4 color = ShadeVol( SampleVol( volSpace.xyz ) );
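The _WorldToVolume matrix can be produced on the CPU from the transform of the 'VolumeSpace' cube; a minimal Unity-flavored sketch, where the VolumeSpaceBinder class name and the volumeCube/sliceMaterial fields are assumptions:
using UnityEngine;

public class VolumeSpaceBinder : MonoBehaviour {
    public Transform volumeCube;   // the 'VolumeSpace' cube framing the volume in the world
    public Material sliceMaterial; // material running the slice shader above

    void Update() {
        // World -> volume space is the inverse of the cube's local-to-world transform.
        sliceMaterial.SetMatrix("_WorldToVolume", volumeCube.worldToLocalMatrix);
    }
}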
Volume tracing in shaders
How to use the GPU to do sub-volume tracing (it walks a few voxels deep into the volume, then composites the data from back to front):
float4 AlphaBlend(float4 dst, float4 src) {
    float4 res = (src * src.a) + (dst - dst * src.a);
    res.a = src.a + (dst.a - dst.a * src.a);
    return res;
}

float4 volTraceSubVolume(float3 objPosStart, float3 cameraPosVolSpace) {
    float maxDepth = 0.15; // depth in volume space, customize!!!
    float numLoops = 10; // can be 400 on nice PC
    float4 curColor = float4(0, 0, 0, 0);

    // Figure out front and back volume coords to walk through:
    float3 frontCoord = objPosStart;
    float3 backCoord = frontCoord + (normalize(objPosStart - cameraPosVolSpace) * maxDepth);
    float3 stepCoord = (frontCoord - backCoord) / numLoops;
    float3 curCoord = backCoord;

    // Add per-pixel random offset, avoids layer aliasing:
    curCoord += stepCoord * RandomFromPositionFast(objPosStart);

    // Walk from back to front (to make front appear in-front of back):
    for (float i = 0; i < numLoops; i++) {
        float intensity = SampleVol(curCoord);
        float4 shaded = ShadeVol(intensity);
        curColor = AlphaBlend(curColor, shaded);
        curCoord += stepCoord;
    }

    return curColor;
}
// In the vertex shader:
float4 worldPos = mul(_Object2World, float4(input.vertex.xyz, 1));
float4 volSpace = mul(_WorldToVolume, float4(worldPos.xyz, 1));
float4 cameraInVolSpace = mul(_WorldToVolume, float4(_WorldSpaceCameraPos.xyz, 1));

// In the pixel shader:
float4 color = volTraceSubVolume( volSpace.xyz, cameraInVolSpace.xyz );
Full volume rendering
Modifying the sub-volume code above, we get:
float4 volTraceSubVolume(float3 objPosStart, float3 cameraPosVolSpace) {
    float maxDepth = 1.73; // sqrt(3), the max distance from one point on a unit cube to any other
    int maxSamples = 400; // just in case, keep the sample count within bounds

    // not shown: trim the front and back positions so both lie within the cube

    int distanceInVoxels = length(UnitVolumeToIntVolume(frontCoord - backCoord)); // measure distance in voxels
    int numLoops = min( distanceInVoxels, maxSamples ); // cap the number of voxels to sample
Mixed resolution scene rendering
How to render parts of the scene at a low resolution and composite them back into place (a minimal setup sketch follows the steps below):
- Set up two off-screen cameras, one following each eye, updated every frame
- Set up two low-resolution render targets (say 200x200 each) that those cameras render into
- Set up a quad that moves in front of the user
Each frame:
- Draw the per-eye render targets at low resolution (volume data, expensive shaders, etc.)
- Draw the scene normally at full resolution (meshes, UI, etc.)
- Draw the quad in front of the user, over the scene, and project the low-res renders onto it
- Result: a visual combination of full-resolution elements with low-resolution but high-density volume data
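A minimal Unity-flavored setup sketch; the MixedResolutionSetup class name, the texture property names, and the inspector-assigned fields are assumptions for illustration:
using UnityEngine;

public class MixedResolutionSetup : MonoBehaviour {
    public Camera leftEyeCamera;   // off-screen camera following the left eye
    public Camera rightEyeCamera;  // off-screen camera following the right eye
    public Material quadMaterial;  // material on the quad held in front of the user

    RenderTexture leftTarget, rightTarget;

    void Start() {
        // Low-resolution targets (200x200 each) that the off-screen cameras render into.
        leftTarget = new RenderTexture(200, 200, 24);
        rightTarget = new RenderTexture(200, 200, 24);
        leftEyeCamera.targetTexture = leftTarget;
        rightEyeCamera.targetTexture = rightTarget;

        // The quad's shader samples the per-eye texture to project the low-res render back into place.
        quadMaterial.SetTexture("_LeftEyeTex", leftTarget);
        quadMaterial.SetTexture("_RightEyeTex", rightTarget);
    }
}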