NVIDIA SDK 10: Clipmaps Sample
Abstract
Clipmaps are a feature first implemented on SGI workstations that allow mapping
extremely high resolution textures to terrains. The original SGI implementation
required highly specialized, custom hardware. The advanced features of the
NVIDIA® GeForce® 8800 now permit the same algorithm using consumer
hardware.
Although current APIs and the GeForce 8800 directly support textures with
dimensions up to 8192, that size can be insufficient for wide landscapes such as
those found in flight simulators. Using a single texture for the whole landscape is
very attractive: the entire terrain can be designed as one image, and its
parameterization is trivial. A single large texture also has clear advantages over
the traditional approach of blending several smaller textures, because it can be
made as complex as you wish. Once a designer has created the whole map, it can be
used as is.
Clipmaps take advantage of the fact that, due to perspective projection, only
relatively small regions within the texture mipmap pyramid are being accessed every
frame. Thus we have to manage these “hot” regions and update them in video
memory as the viewer moves around. A DX10 solution is to store such regions in a
texture array. Being able to index into it from the pixel shader allows for a
straightforward implementation of the clipmap algorithm in DX10.
How Clipmaps Work
A clipmap can be defined as a partial representation of a mipmap pyramid that holds all
the information needed for texturing in a given frame. How do you determine which data
from the source texture can potentially be used? The answer lies in the mipmap sample
selection strategy: the ideal case for texturing is a 1:1 mapping of texels to pixel
area, which lets you derive a clip size for the mipmap levels from the current screen
resolution. The coarsest levels of the mipmap pyramid always fit in video memory and
can be used statically. All other mip levels form the clipmap stack, which is updated
dynamically so that it holds the required data every frame (see Figure 1). In the most
common case, the contents of the stack are fully defined by its size and the viewer’s
position.
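As a rough illustration (the function and variable names below are hypothetical, not
taken from the sample), the number of stack layers follows directly from the source
image size and the chosen clip size: every level larger than the clip size goes into
the stack, and the remaining, coarser levels form the static pyramid.
// Hypothetical helper: number of clipmap stack layers for a given source size.
// Every mip level larger than the clip size belongs to the stack; the remaining
// (coarser) levels form the static mipmap pyramid.
int ComputeStackDepth( int sourceImageSize, int clipSize )
{
    int depth = 0;
    for ( int levelSize = sourceImageSize; levelSize > clipSize; levelSize /= 2 )
        ++depth;
    return depth;   // e.g. a 16384 source with a 1024 clip size gives 4 stack layers
}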
The basic idea is to store the clipmap stack in a 2D texture array, a new feature of
DX10. The remaining part of the mipmap pyramid is implemented as a conventional 2D
texture with mips. The stack can be updated dynamically with the copy/update
subresource methods. Of course, it is not always possible to hold all the required
data in system memory either, so an additional mechanism is needed to stream the
necessary data efficiently from disk.
The clipmap stack is stored in a 2D texture array. This array forms the dynamic part
of the clipmap and must contain valid data for every stack level each frame. Since
each original mip level gets its own array slice, create this texture without mips.
The remaining part of the image is stored as a conventional 2D texture.
Using the DX10 API, create these resources as follows (note that for the clipmap
stack texture, you must specify the number of layers in the ArraySize member):
D3D10_TEXTURE2D_DESC texDesc;
ZeroMemory( &texDesc, sizeof(texDesc) );
texDesc.ArraySize = 1;
texDesc.Usage = D3D10_USAGE_DEFAULT;
texDesc.BindFlags = D3D10_BIND_SHADER_RESOURCE;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.Width = g_PyramidTextureWidth;
texDesc.Height = g_PyramidTextureHeight;
texDesc.MipLevels = g_SourceImageMipsNum - g_StackDepth;
texDesc.SampleDesc.Count = 1;
pd3dDevice->CreateTexture2D(&texDesc, NULL, &g_pPyramidTexture);
texDesc.ArraySize = g_StackDepth;
texDesc.Width = g_ClipmapStackSize;
texDesc.Height = g_ClipmapStackSize;
texDesc.MipLevels = 1;
pd3dDevice->CreateTexture2D(&texDesc, NULL, &g_pStackTexture);
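As the viewer moves, newly exposed regions of each stack layer are uploaded with the
update-subresource mechanism. The sketch below (a hypothetical helper, not part of the
sample) shows the basic idea for a single rectangular region; with toroidal addressing
a wrapped update is split into up to four such boxes.
// Hypothetical sketch: upload a rectangular region of source data into one
// layer of the clipmap stack. With toroidal addressing, a region that wraps
// around the stack borders is split into up to four boxes like this one.
void UpdateStackRegion( ID3D10Device* pd3dDevice, ID3D10Texture2D* pStackTexture,
                        UINT layer, UINT x, UINT y, UINT width, UINT height,
                        const BYTE* pSourceData, UINT sourceRowPitch )
{
    D3D10_BOX box;
    box.left  = x;  box.right  = x + width;
    box.top   = y;  box.bottom = y + height;
    box.front = 0;  box.back   = 1;

    // Each stack layer is a separate array slice with a single mip level
    UINT subresource = D3D10CalcSubresource( 0, layer, 1 );
    pd3dDevice->UpdateSubresource( pStackTexture, subresource, &box,
                                   pSourceData, sourceRowPitch, 0 );
}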
Clipmap Texture Addressing
All the work is done in the pixel shader. First, determine the mip level to fetch
from: use the ddx and ddy instructions to measure the size of the pixel footprint in
texture space.
float2 dx = ddx( input.texCoord * textureSize.x );
float2 dy = ddy( input.texCoord * textureSize.y );
float d = max( sqrt( dot( dx.x, dx.x ) + dot( dx.y, dx.y ) ),
               sqrt( dot( dy.x, dy.x ) + dot( dy.y, dy.y ) ) );
Now a suitable mip level can be computed as follows.
float mipLevel = log2( d );
Keep mipLevel as a float and use its fractional part to blend between two
neighbouring stack layers, which gives trilinear filtering (see PS_Trilinear below).
Clipmap texture addressing is straightforward: the input texture coordinates only
need to be scaled according to the mip level. The scale factor is the source image
size divided by the clipmap stack size; a 0.5 offset is then added because the stack
center, with coordinates (0.5, 0.5), corresponds to the corner (0, 0) of the original
image.
float2 clipTexCoord = input.texCoord / pow( 2, iMipLevel );
clipTexCoord *= scaleFactor;
float4 color = StackTexture.Sample( stackSampler,
                                    float3( clipTexCoord + 0.5, iMipLevel ) );
For the stack sampler, specify the address mode as wrap to implement toroidal
addressing.
Table 1. Storage Efficiency*

Texture size     4096²            8192²            32768²
Full mipmap      85.3             341.3            5461.3
1024 clipmap     13.3 (16%)       17.3 (5%)        25.3 (<1%)
2048 clipmap     37.3 (44%)       53.3 (16%)       85.3 (1.6%)
4096 clipmap     85.3 (100%)      149.3 (44%)      213.3 (3.9%)

*Memory costs in MB for 32-bit texel storage
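The figures in Table 1 follow from a simple calculation: the static pyramid costs
clipSize² × 4 bytes × 4/3 for the full mip chain, and each stack layer adds
clipSize² × 4 bytes. The sketch below (a hypothetical helper, not part of the sample)
reproduces the first column.
// Hypothetical sketch: clipmap memory footprint in megabytes for 32-bit texels.
double ClipmapSizeMB( int sourceSize, int clipSize )
{
    const double bytesPerTexel = 4.0;
    // Static pyramid: clipSize x clipSize texture with a full mip chain (~4/3 overhead)
    double pyramid = (double)clipSize * clipSize * bytesPerTexel * 4.0 / 3.0;
    // Dynamic stack: one clipSize x clipSize slice per level larger than the clip size
    int stackDepth = 0;
    for ( int s = sourceSize; s > clipSize; s /= 2 )
        ++stackDepth;
    double stack = (double)stackDepth * clipSize * clipSize * bytesPerTexel;
    return ( pyramid + stack ) / ( 1024.0 * 1024.0 );
}
// ClipmapSizeMB( 4096, 1024 ) is roughly 13.3 MB, matching the first column of Table 1.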
Texture Filtering: Anisotropic / Trilinear
The complete effect file below implements both trilinear and anisotropic clipmap
filtering, each with an optional parallax-mapping variant, plus debug visualization of
the mip levels and the clipmap stack contents.
//----------------------------------------------------------------------------------
// File: Clipmaps.fx
// Author: Evgeny Makarov
// Email: sdkfeedback@nvidia.com
//
// Copyright (c) 2007 NVIDIA Corporation. All rights reserved.
//
// TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THIS SOFTWARE IS PROVIDED
// *AS IS* AND NVIDIA AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, EITHER EXPRESS
// OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY
// AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL NVIDIA OR ITS SUPPLIERS
// BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT, OR CONSEQUENTIAL DAMAGES
// WHATSOEVER (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF BUSINESS PROFITS,
// BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION, OR ANY OTHER PECUNIARY LOSS)
// ARISING OUT OF THE USE OF OR INABILITY TO USE THIS SOFTWARE, EVEN IF NVIDIA HAS
// BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
//
//
//----------------------------------------------------------------------------------
Texture2D PyramidTexture;
Texture2D PyramidTextureHM;
Texture2DArray StackTexture;
#define MAX_ANISOTROPY 16
#define MIP_LEVELS_MAX 7
SamplerState samplerLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
SamplerState samplerAnisotropic
{
Filter = ANISOTROPIC;
MaxAnisotropy = MAX_ANISOTROPY;
AddressU = Wrap;
AddressV = Wrap;
};
SamplerState samplerPoint
{
Filter = MIN_MAG_MIP_POINT;
AddressU = Wrap;
AddressV = Wrap;
};
SamplerState samplerStackLinear
{
Filter = MIN_MAG_LINEAR_MIP_POINT;
AddressU = Wrap;
AddressV = Wrap;
};
RasterizerState RStateMSAA
{
MultisampleEnable = TRUE;
};
struct VSIn
{
uint index : SV_VertexID;
};
struct PSIn
{
float4 position : SV_Position;
float2 texCoord : TEXCOORD0;
float3 viewVectorTangent : TEXCOORD1;
float3 lightVectorTangent : TEXCOORD2;
};
struct PSInQuad
{
float4 position : SV_Position;
float3 texCoord : TEXCOORD0;
};
struct PSOut
{
float4 color : SV_Target;
};
struct PSOutQuad
{
float4 color : SV_Target;
};
cbuffer cb0
{
row_major float4x4 g_ModelViewProj;
float3 g_EyePosition;
float3 g_LightPosition;
float3 g_WorldRight;
float3 g_WorldUp;
};
cbuffer cb1
{
int2 g_TextureSize; // Source texture size
float2 g_StackCenter; // Stack center position defined by normalized texture coordinates
uint g_StackDepth; // Number of layers in a stack
float2 g_ScaleFactor; // SourceImageSize / ClipmapStackSize
float3 g_MipColors[MIP_LEVELS_MAX];
int g_SphereMeridianSlices;
int g_SphereParallelSlices;
float g_ScreenAspectRatio;
}
//--------------------------------------------------------------------------------------
// Calculate local normal using height values from Texture2D
//--------------------------------------------------------------------------------------
float3 GetLocalNormal(Texture2D _texture, SamplerState _sampler, float2 _coordinates)
{
float3 localNormal;
localNormal.x = _texture.Sample( _sampler, _coordinates, int2( 1, 0) ).x;
localNormal.x -= _texture.Sample( _sampler, _coordinates, int2(-1, 0) ).x;
localNormal.y = _texture.Sample( _sampler, _coordinates, int2( 0, 1) ).x;
localNormal.y -= _texture.Sample( _sampler, _coordinates, int2( 0, -1) ).x;
localNormal.z = sqrt( 1.0 - localNormal.x * localNormal.x - localNormal.y * localNormal.y );
return localNormal;
}
//--------------------------------------------------------------------------------------
// Calculate local normal using height values from Texture2DArray
//--------------------------------------------------------------------------------------
float3 GetLocalNormal_Array(Texture2DArray _texture, SamplerState _sampler, float3 _coordinates)
{
float3 localNormal;
localNormal.x = _texture.Sample( _sampler, _coordinates, int2( 1, 0) ).w;
localNormal.x -= _texture.Sample( _sampler, _coordinates, int2(-1, 0) ).w;
localNormal.y = _texture.Sample( _sampler, _coordinates, int2( 0, 1) ).w;
localNormal.y -= _texture.Sample( _sampler, _coordinates, int2( 0, -1) ).w;
localNormal.xy *= 5.0 / ( _coordinates.z + 1.0 ); // Scale the normal vector to add relief
localNormal.z = sqrt( 1.0 - localNormal.x * localNormal.x - localNormal.y * localNormal.y );
return localNormal;
}
//--------------------------------------------------------------------------------------
// Calculate a minimum stack level to fetch from
//--------------------------------------------------------------------------------------
float GetMinimumStackLevel(float2 coordinates)
{
float2 distance;
distance.x = abs( coordinates.x - g_StackCenter.x );
distance.x = min( distance.x, 1.0 - distance.x );
distance.y = abs( coordinates.y - g_StackCenter.y );
distance.y = min( distance.y, 1.0 - distance.y );
return max( log2( distance.x * g_ScaleFactor.x * 4.0 ), log2( distance.y * g_ScaleFactor.y * 4.0 ) );
}
//--------------------------------------------------------------------------------------
// Calculate vertex positions for procedural sphere mesh based on an input index buffer
//--------------------------------------------------------------------------------------
PSIn VSMain(VSIn input)
{
PSIn output;
float meridianPart = ( input.index % ( g_SphereMeridianSlices + 1 ) ) / float( g_SphereMeridianSlices );
float parallelPart = ( input.index / ( g_SphereMeridianSlices + 1 ) ) / float( g_SphereParallelSlices );
float angle1 = meridianPart * 3.14159265 * 2.0;
float angle2 = ( parallelPart - 0.5 ) * 3.14159265;
float cos_angle1 = cos( angle1 );
float sin_angle1 = sin( angle1 );
float cos_angle2 = cos( angle2 );
float sin_angle2 = sin( angle2 );
float3 VertexPosition;
VertexPosition.z = cos_angle1 * cos_angle2;
VertexPosition.x = sin_angle1 * cos_angle2;
VertexPosition.y = sin_angle2;
output.position = mul( float4( VertexPosition, 1.0 ), g_ModelViewProj );
output.texCoord = float2( 1.0 - meridianPart, 1.0 - parallelPart );
float3 tangent = float3( cos_angle1, 0.0, -sin_angle1 );
float3 binormal = float3( -sin_angle1 * sin_angle2, cos_angle2, -cos_angle1 * sin_angle2 );
float3 viewVector = normalize(g_EyePosition - VertexPosition);
output.viewVectorTangent.x = dot( viewVector, tangent );
output.viewVectorTangent.y = dot( viewVector, binormal);
output.viewVectorTangent.z = dot( viewVector, VertexPosition );
float3 lightVector = normalize( g_LightPosition );
output.lightVectorTangent.x = dot( lightVector, tangent );
output.lightVectorTangent.y = dot( lightVector, binormal);
output.lightVectorTangent.z = dot( lightVector, VertexPosition );
return output;
}
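//--------------------------------------------------------------------------------------
// Pass-through vertex shader for the stack visualization quads
//--------------------------------------------------------------------------------------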
PSInQuad VSMainQuad(VSIn input)
{
PSInQuad output;
// We don't need to do any calculations here because everything
// is done in the geometry shader.
output.position = 0;
output.texCoord = 0;
return output;
}
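//--------------------------------------------------------------------------------------
// Expand each point into a small screen-space quad that displays one clipmap stack layer
//--------------------------------------------------------------------------------------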
[maxvertexcount(4)]
void GSMainQuad( point PSInQuad inputPoint[1], inout TriangleStream<PSInQuad> outputQuad, uint primitive : SV_PrimitiveID )
{
PSInQuad output;
output.position.z = 0.5;
output.position.w = 1.0;
output.texCoord.z = primitive;
float sizeY = 0.3;
float sizeX = sizeY * 1.2 / g_ScreenAspectRatio;
float offset = 0.7 - min( 1.2 / g_StackDepth, sizeY ) * primitive;
output.position.x = -0.9 - sizeX * 0.2;
output.position.y = offset;
output.texCoord.xy = float2( 0.0, 0.0 );
outputQuad.Append( output );
output.position.x = -0.9 + sizeX * 0.8;
output.position.y = offset + sizeY * 0.2;
output.texCoord.xy = float2( 1.0, 0.0 );
outputQuad.Append( output );
output.position.x = -0.9;
output.position.y = offset - sizeY - sizeY * 0.2;
output.texCoord.xy = float2( 0.0, 1.0 );
outputQuad.Append( output );
output.position.x = -0.9 + sizeX;
output.position.y = offset - sizeY;
output.texCoord.xy = float2( 1.0, 1.0 );
outputQuad.Append( output );
outputQuad.RestartStrip();
}
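//--------------------------------------------------------------------------------------
// Clipmap sampling with trilinear filtering: blend the pyramid texture with two
// neighbouring stack layers
//--------------------------------------------------------------------------------------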
PSOut PS_Trilinear(PSIn input)
{
PSOut output;
// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float d = max( sqrt( dot( dx.x, dx.x ) + dot( dx.y, dx.y ) ) , sqrt( dot( dy.x, dy.x ) + dot( dy.y, dy.y ) ) );
// Calculate base mip level and fractional blending part for trilinear filtering.
float mipLevel = max( log2( d ), GetMinimumStackLevel( input.texCoord ) );
float blendGlobal = saturate(g_StackDepth - mipLevel);
float diffuse = saturate( input.lightVectorTangent.z );
diffuse = max( diffuse, 0.05 );
float4 color0 = PyramidTexture.Sample( samplerLinear, input.texCoord );
// Make early out for cases where we don't need to fetch from clipmap stack
if( blendGlobal == 0.0 )
{
output.color = color0 * diffuse;
}
else
{
// This fractional part defines the factor used for blending
// between two neighbouring stack layers
float blendLayers = modf(mipLevel, mipLevel);
blendLayers = saturate(blendLayers);
int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, g_StackDepth - 1 );
mipLevel = clamp( mipLevel, 0, g_StackDepth - 1 );
// Here we need to perform proper scaling for input texture coordinates.
// For each layer we multiply input coordinates by g_ScaleFactor / pow( 2, layer ).
// We add 0.5 to result, because our stack center with coordinates (0.5, 0.5)
// starts from corner with coordinates (0, 0) of the original image.
float2 clipTexCoord = input.texCoord / pow( 2, mipLevel );
clipTexCoord *= g_ScaleFactor;
float4 color1 = StackTexture.Sample( samplerStackLinear, float3( clipTexCoord + 0.5, mipLevel ) );
clipTexCoord = input.texCoord / pow( 2, nextMipLevel );
clipTexCoord *= g_ScaleFactor;
float4 color2 = StackTexture.Sample( samplerStackLinear, float3( clipTexCoord + 0.5, nextMipLevel ) );
output.color = lerp( color0, lerp( color1, color2, blendLayers ), blendGlobal ) * diffuse;
}
return output;
}
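//--------------------------------------------------------------------------------------
// Trilinear clipmap sampling with a parallax offset and per-pixel lighting based on
// the height map
//--------------------------------------------------------------------------------------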
PSOut PS_Trilinear_Parallax(PSIn input)
{
PSOut output;
// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float d = max( sqrt( dot( dx.x, dx.x ) + dot( dx.y, dx.y ) ) , sqrt( dot( dy.x, dy.x ) + dot( dy.y, dy.y ) ) );
// Calculate base mip level and fractional blending part.
float mipLevel = max( log2( d ), GetMinimumStackLevel( input.texCoord ) );
float blendGlobal = saturate( g_StackDepth - mipLevel );
float2 viewVectorTangent = normalize( input.viewVectorTangent ).xy;
float2 scaledViewVector = viewVectorTangent / g_ScaleFactor;
float3 lightVector = normalize(input.lightVectorTangent);
float2 newCoordinates = input.texCoord - scaledViewVector * ( PyramidTextureHM.Sample( samplerLinear, input.texCoord ).x * 0.02 - 0.01 );
float3 normal = GetLocalNormal( PyramidTextureHM, samplerLinear, newCoordinates );
float diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
float4 color0 = PyramidTexture.Sample( samplerLinear, newCoordinates ) * diffuse;
if( blendGlobal == 0.0 )
{
output.color = color0;
}
else
{
float blendLayers = modf( mipLevel, mipLevel );
blendLayers = saturate( blendLayers );
int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, g_StackDepth - 1 );
mipLevel = clamp( mipLevel, 0, g_StackDepth - 1 );
float scale = pow( 2, mipLevel );
float2 clipTexCoord = input.texCoord / scale;
clipTexCoord *= g_ScaleFactor;
float height = StackTexture.Sample( samplerStackLinear, float3(clipTexCoord + 0.5, mipLevel) ).w * 0.02 - 0.01;
newCoordinates = clipTexCoord - viewVectorTangent * height / scale + 0.5;
float4 color1 = StackTexture.Sample( samplerStackLinear, float3( newCoordinates, mipLevel ) );
normal = GetLocalNormal_Array( StackTexture, samplerStackLinear, float3( newCoordinates, mipLevel ) );
diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
color1 *= diffuse;
scale = pow( 2, nextMipLevel );
clipTexCoord = input.texCoord / scale;
clipTexCoord *= g_ScaleFactor;
height = StackTexture.Sample( samplerStackLinear, float3( clipTexCoord + 0.5, nextMipLevel ) ).w * 0.02 - 0.01;
newCoordinates = clipTexCoord - viewVectorTangent * height / scale + 0.5;
float4 color2 = StackTexture.Sample( samplerStackLinear, float3( newCoordinates, nextMipLevel ) );
normal = GetLocalNormal_Array( StackTexture, samplerStackLinear, float3( newCoordinates, nextMipLevel ) );
diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
color2 *= diffuse;
output.color = lerp( color0, lerp( color1, color2, blendLayers ), blendGlobal );
}
return output;
}
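//--------------------------------------------------------------------------------------
// Clipmap sampling with anisotropic filtering; the stack level is derived from the
// minor axis of the pixel footprint
//--------------------------------------------------------------------------------------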
PSOut PS_Anisotropic(PSIn input)
{
PSOut output;
// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float squaredLengthX = dot( dx.x, dx.x ) + dot( dx.y, dx.y );
float squaredLengthY = dot( dy.x, dy.x ) + dot( dy.y, dy.y );
float det = abs(dx.x * dy.y - dx.y * dy.x);
bool isMajorX = squaredLengthX > squaredLengthY;
float squaredLengthMajor = isMajorX ? squaredLengthX : squaredLengthY;
float lengthMajor = sqrt(squaredLengthMajor);
float normMajor = 1.0 / lengthMajor;
if( isMajorX )
squaredLengthMajor = squaredLengthX;
else
squaredLengthMajor = squaredLengthY;
lengthMajor = sqrt( squaredLengthMajor );
float ratioOfAnisotropy = squaredLengthMajor / det;
float lengthMinor = 0;
lengthMinor = ( ratioOfAnisotropy > MAX_ANISOTROPY ) ? lengthMajor / ratioOfAnisotropy : det / lengthMajor;
lengthMinor = max( lengthMinor, 1.0 );
// Calculate base mip level and fractional blending part.
float mipLevel = max( log2( lengthMinor ), GetMinimumStackLevel( input.texCoord ) );
float blendGlobal = saturate( g_StackDepth - mipLevel );
float diffuse = saturate( input.lightVectorTangent.z );
diffuse = max( diffuse, 0.05 );
float4 color0 = PyramidTexture.Sample( samplerAnisotropic, input.texCoord );
if( blendGlobal == 0.0 )
{
output.color = color0 * diffuse;
}
else
{
float blendLayers = modf( mipLevel, mipLevel );
int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, g_StackDepth - 1 );
mipLevel = clamp( mipLevel, 0, g_StackDepth - 1 );
float2 clipTexCoord = input.texCoord / pow( 2, mipLevel );
clipTexCoord *= g_ScaleFactor;
float4 color1 = StackTexture.Sample( samplerAnisotropic, float3( clipTexCoord + 0.5, mipLevel ) );
clipTexCoord = input.texCoord / pow( 2, nextMipLevel );
clipTexCoord *= g_ScaleFactor;
float4 color2 = StackTexture.Sample( samplerAnisotropic, float3(clipTexCoord + 0.5, nextMipLevel ) );
output.color = lerp( color0, lerp( color1, color2, blendLayers ), blendGlobal ) * diffuse;
}
return output;
}
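//--------------------------------------------------------------------------------------
// Anisotropic clipmap sampling with a parallax offset and per-pixel lighting based on
// the height map
//--------------------------------------------------------------------------------------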
PSOut PS_Anisotropic_Parallax(PSIn input)
{
PSOut output;
// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float squaredLengthX = dot( dx.x, dx.x ) + dot( dx.y, dx.y );
float squaredLengthY = dot( dy.x, dy.x ) + dot( dy.y, dy.y );
float det = abs( dx.x * dy.y - dx.y * dy.x );
bool isMajorX = squaredLengthX > squaredLengthY;
float squaredLengthMajor = isMajorX ? squaredLengthX : squaredLengthY;
float lengthMajor = sqrt( squaredLengthMajor );
float normMajor = 1.0 / lengthMajor;
if( isMajorX )
squaredLengthMajor = squaredLengthX;
else
squaredLengthMajor = squaredLengthY;
lengthMajor = sqrt( squaredLengthMajor );
float ratioOfAnisotropy = squaredLengthMajor / det;
float lengthMinor = 0;
lengthMinor = ( ratioOfAnisotropy > MAX_ANISOTROPY ) ? lengthMajor / ratioOfAnisotropy : det / lengthMajor;
lengthMinor = max( lengthMinor, 1.0 );
// Calculate base mip level and fractional blending part.
float mipLevel = max( log2( lengthMinor), GetMinimumStackLevel( input.texCoord ) );
float blendGlobal = saturate( g_StackDepth - mipLevel );
float2 viewVectorTangent = normalize( input.viewVectorTangent ).xy;
float2 scaledViewVector = viewVectorTangent / g_ScaleFactor;
float3 lightVector = normalize( input.lightVectorTangent );
float2 newCoordinates = input.texCoord - scaledViewVector * ( PyramidTextureHM.Sample( samplerLinear, input.texCoord ).x * 0.02 - 0.01 );
float3 normal = GetLocalNormal( PyramidTextureHM, samplerAnisotropic, newCoordinates );
float diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
float4 color0 = PyramidTexture.Sample( samplerAnisotropic, newCoordinates ) * diffuse;
if( blendGlobal == 0.0 )
{
output.color = color0;
}
else
{
float blendLayers = modf( mipLevel, mipLevel );
blendLayers = saturate( blendLayers );
int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, g_StackDepth - 1 );
mipLevel = clamp( mipLevel, 0, g_StackDepth - 1 );
float scale = pow( 2, mipLevel );
float2 clipTexCoord = input.texCoord / scale;
clipTexCoord *= g_ScaleFactor;
clipTexCoord += 0.5;
float height = StackTexture.Sample( samplerAnisotropic, float3( clipTexCoord, mipLevel ) ).w * 0.02 - 0.01;
newCoordinates = clipTexCoord - viewVectorTangent * height / scale;
float4 color1 = StackTexture.Sample( samplerAnisotropic, float3( newCoordinates, mipLevel ) );
normal = GetLocalNormal_Array( StackTexture, samplerAnisotropic, float3( newCoordinates, mipLevel ) );
diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
color1 *= diffuse;
scale = pow( 2, nextMipLevel );
clipTexCoord = input.texCoord / scale;
clipTexCoord *= g_ScaleFactor;
clipTexCoord += 0.5f;
height = StackTexture.Sample( samplerAnisotropic, float3( clipTexCoord, nextMipLevel ) ).w * 0.02 - 0.01;
newCoordinates = clipTexCoord - viewVectorTangent * height / scale;
float4 color2 = StackTexture.Sample( samplerAnisotropic, float3( newCoordinates, nextMipLevel ) );
normal = GetLocalNormal_Array( StackTexture, samplerAnisotropic, float3( newCoordinates, nextMipLevel ) );
diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
color2 *= diffuse;
output.color = lerp( color0, lerp( color1, color2, blendLayers ), blendGlobal );
}
return output;
}
//--------------------------------------------------------------------------------------
// Calculate color values to show
//--------------------------------------------------------------------------------------
PSOut PS_Color(PSIn input)
{
PSOut output;
// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float d = max( sqrt( dot( dx.x, dx.x ) + dot( dx.y, dx.y ) ) , sqrt( dot( dy.x, dy.x ) + dot( dy.y, dy.y ) ) );
// Calculate base mip level and fractional blending part.
float mipLevel = log2( d );
float blendLayers = modf( mipLevel, mipLevel );
int mipBoundary = min( g_StackDepth, MIP_LEVELS_MAX ) - 1;
int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, mipBoundary );
mipLevel = clamp( mipLevel, 0, mipBoundary );
output.color.xyz = lerp( g_MipColors[mipLevel], g_MipColors[nextMipLevel], blendLayers );
return output;
}
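//--------------------------------------------------------------------------------------
// Display a single clipmap stack layer; a narrow band on the right shows its mip color
//--------------------------------------------------------------------------------------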
PSOut PSQuad(PSInQuad input)
{
PSOut output;
float width = 0.995 - input.texCoord.x;
width = saturate( width * 50.0 );
output.color.xyz = lerp( g_MipColors[input.texCoord.z], StackTexture.Sample( samplerStackLinear, input.texCoord ), width);
output.color.w = 1.0;
return output;
}
//--------------------------------------------------------------------------------------
// Compiled shaders used in different techniques
//--------------------------------------------------------------------------------------
VertexShader vsCompiled = CompileShader( vs_4_0, VSMain() );
VertexShader vsCompiledQuad = CompileShader( vs_4_0, VSMainQuad() );
GeometryShader gsCompiledQuad = CompileShader( gs_4_0, GSMainQuad() );
PixelShader ps_Trilinear = CompileShader( ps_4_0, PS_Trilinear() );
PixelShader ps_Trilinear_Parallax = CompileShader( ps_4_0, PS_Trilinear_Parallax() );
PixelShader ps_Anisotropic = CompileShader( ps_4_0, PS_Anisotropic() );
PixelShader ps_Anisotropic_Parallax = CompileShader( ps_4_0, PS_Anisotropic_Parallax() );
PixelShader ps_Color = CompileShader( ps_4_0, PS_Color() );
PixelShader psCompiledQuad = CompileShader( ps_4_0, PSQuad() );
technique10 Trilinear
{
pass p0
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Trilinear );
SetRasterizerState(RStateMSAA);
}
pass p1
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Trilinear_Parallax );
SetRasterizerState(RStateMSAA);
}
}
technique10 Anisotropic
{
pass p0
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Anisotropic );
SetRasterizerState(RStateMSAA);
}
pass p1
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Anisotropic_Parallax );
SetRasterizerState(RStateMSAA);
}
}
technique10 ColoredMips
{
pass p0
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Color );
SetRasterizerState(RStateMSAA);
}
}
technique10 StackDrawPass
{
pass p0
{
SetVertexShader( vsCompiledQuad );
SetGeometryShader( gsCompiledQuad );
SetPixelShader( psCompiledQuad );
SetRasterizerState(RStateMSAA);
}
}
//----------------------------------------------------------------------------------
// File: JPEG_Preprocessor.fx
// Author: Evgeny Makarov
// Email: sdkfeedback@nvidia.com
//
// Copyright (c) 2007 NVIDIA Corporation. All rights reserved.
//
// TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THIS SOFTWARE IS PROVIDED
// *AS IS* AND NVIDIA AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, EITHER EXPRESS
// OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY
// AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL NVIDIA OR ITS SUPPLIERS
// BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT, OR CONSEQUENTIAL DAMAGES
// WHATSOEVER (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF BUSINESS PROFITS,
// BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION, OR ANY OTHER PECUNIARY LOSS)
// ARISING OUT OF THE USE OF OR INABILITY TO USE THIS SOFTWARE, EVEN IF NVIDIA HAS
// BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
//
//
//----------------------------------------------------------------------------------
Texture2D TextureDCT;
Texture2D<uint1> QuantTexture;
Texture2D RowTexture1;
Texture2D RowTexture2;
Texture2D ColumnTexture1;
Texture2D ColumnTexture2;
Texture2D TargetTexture;
Texture2D TextureY;
Texture2D TextureCb;
Texture2D TextureCr;
Texture2D TextureHeight;
SamplerState samplerPoint
{
Filter = MIN_MAG_MIP_POINT;
AddressU = Wrap;
AddressV = Wrap;
};
SamplerState samplerLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
BlendState NoBlending
{
BlendEnable[0] = FALSE;
};
struct VSIn
{
uint index : SV_VertexID;
};
struct PSIn
{
float4 position : SV_Position;
float3 texCoord : TEXCOORD0;
};
struct PSOut
{
float4 color : SV_Target;
};
struct PSOutMRT
{
float4 color0 : SV_Target0;
float4 color1 : SV_Target1;
};
cbuffer cb0
{
float g_RowScale;
float g_ColScale;
};
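//--------------------------------------------------------------------------------------
// Pass-through vertex shader; the full-screen quad is generated in the geometry shader
//--------------------------------------------------------------------------------------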
PSIn VS_Quad(VSIn input)
{
PSIn output;
output.position = 0;
output.texCoord = 0;
return output;
}
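//--------------------------------------------------------------------------------------
// Expand each point into a full-screen quad
//--------------------------------------------------------------------------------------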
[maxvertexcount(4)]
void GS_Quad( point PSIn inputPoint[1], inout TriangleStream<PSIn> outputQuad, uint primitive : SV_PrimitiveID )
{
PSIn output;
output.position.z = 0.5;
output.position.w = 1.0;
output.texCoord.z = primitive;
output.position.x = -1.0;
output.position.y = 1.0;
output.texCoord.xy = float2( 0.0, 0.0 );
outputQuad.Append( output );
output.position.x = 1.0;
output.position.y = 1.0;
output.texCoord.xy = float2( 1.0, 0.0 );
outputQuad.Append( output );
output.position.x = -1.0;
output.position.y = -1.0;
output.texCoord.xy = float2( 0.0, 1.0 );
outputQuad.Append( output );
output.position.x = 1.0;
output.position.y = -1.0;
output.texCoord.xy = float2( 1.0, 1.0 );
outputQuad.Append( output );
outputQuad.RestartStrip();
}
/////////////////////////////////////////////////////////////////////////////
// JPEG Decompression
// IDCT based on Independent JPEG Group code
/////////////////////////////////////////////////////////////////////////////
PSOutMRT PS_IDCT_Rows( PSIn input )
{
PSOutMRT output;
float d[8];
// Read row elements
d[0] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( -4, 0 ) );
d[1] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( -3, 0 ) );
d[2] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( -2, 0 ) );
d[3] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( -1, 0 ) );
d[4] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( 0, 0 ) );
d[5] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( 1, 0 ) );
d[6] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( 2, 0 ) );
d[7] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( 3, 0 ) );
// Perform dequantization
d[0] *= QuantTexture.Sample( samplerPoint, float2( 0.0625, input.texCoord.y * g_ColScale ) );
d[1] *= QuantTexture.Sample( samplerPoint, float2( 0.1875, input.texCoord.y * g_ColScale ) );
d[2] *= QuantTexture.Sample( samplerPoint, float2( 0.3125, input.texCoord.y * g_ColScale ) );
d[3] *= QuantTexture.Sample( samplerPoint, float2( 0.4375, input.texCoord.y * g_ColScale ) );
d[4] *= QuantTexture.Sample( samplerPoint, float2( 0.5625, input.texCoord.y * g_ColScale ) );
d[5] *= QuantTexture.Sample( samplerPoint, float2( 0.6875, input.texCoord.y * g_ColScale ) );
d[6] *= QuantTexture.Sample( samplerPoint, float2( 0.8125, input.texCoord.y * g_ColScale ) );
d[7] *= QuantTexture.Sample( samplerPoint, float2( 0.9375, input.texCoord.y * g_ColScale ) );
float tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7;
float tmp10, tmp11, tmp12, tmp13;
float z5, z10, z11, z12, z13;
tmp0 = d[0];
tmp1 = d[2];
tmp2 = d[4];
tmp3 = d[6];
tmp10 = tmp0 + tmp2;
tmp11 = tmp0 - tmp2;
tmp13 = tmp1 + tmp3;
tmp12 = (tmp1 - tmp3) * 1.414213562 - tmp13;
tmp0 = tmp10 + tmp13;
tmp3 = tmp10 - tmp13;
tmp1 = tmp11 + tmp12;
tmp2 = tmp11 - tmp12;
tmp4 = d[1];
tmp5 = d[3];
tmp6 = d[5];
tmp7 = d[7];
z13 = tmp6 + tmp5;
z10 = tmp6 - tmp5;
z11 = tmp4 + tmp7;
z12 = tmp4 - tmp7;
tmp7 = z11 + z13;
tmp11 = (z11 - z13) * 1.414213562;
z5 = (z10 + z12) * 1.847759065;
tmp10 = 1.082392200 * z12 - z5;
tmp12 = -2.613125930 * z10 + z5;
tmp6 = tmp12 - tmp7;
tmp5 = tmp11 - tmp6;
tmp4 = tmp10 + tmp5;
output.color0.x = tmp0 + tmp7;
output.color1.w = tmp0 - tmp7;
output.color0.y = tmp1 + tmp6;
output.color1.z = tmp1 - tmp6;
output.color0.z = tmp2 + tmp5;
output.color1.y = tmp2 - tmp5;
output.color1.x = tmp3 + tmp4;
output.color0.w = tmp3 - tmp4;
return output;
}
PSOut PS_IDCT_Unpack_Rows( PSIn input )
{
PSOut output;
// Get eight values stored in 2 textures
float4 values1 = RowTexture1.Sample( samplerPoint, input.texCoord );
float4 values2 = RowTexture2.Sample( samplerPoint, input.texCoord );
// Calculate index masks with a single non-zero component to select the element to use
int index = frac( input.texCoord.x * g_RowScale ) * 8.0;
float4 indexMask1 = ( index == float4( 0, 1, 2, 3 ) );
float4 indexMask2 = ( index == float4( 4, 5, 6, 7 ) );
output.color = dot( values1, indexMask1 ) + dot( values2, indexMask2 );
return output;
}
PSOutMRT PS_IDCT_Columns( PSIn input )
{
PSOutMRT output;
float d[8];
// Read column elements
d[0] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, -4 ) );
d[1] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, -3 ) );
d[2] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, -2 ) );
d[3] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, -1 ) );
d[4] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, 0 ) );
d[5] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, 1 ) );
d[6] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, 2 ) );
d[7] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, 3 ) );
float tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7;
float tmp10, tmp11, tmp12, tmp13;
float z5, z10, z11, z12, z13;
tmp0 = d[0];
tmp1 = d[2];
tmp2 = d[4];
tmp3 = d[6];
tmp10 = tmp0 + tmp2;
tmp11 = tmp0 - tmp2;
tmp13 = tmp1 + tmp3;
tmp12 = (tmp1 - tmp3) * 1.414213562 - tmp13;
tmp0 = tmp10 + tmp13;
tmp3 = tmp10 - tmp13;
tmp1 = tmp11 + tmp12;
tmp2 = tmp11 - tmp12;
tmp4 = d[1];
tmp5 = d[3];
tmp6 = d[5];
tmp7 = d[7];
z13 = tmp6 + tmp5;
z10 = tmp6 - tmp5;
z11 = tmp4 + tmp7;
z12 = tmp4 - tmp7;
tmp7 = z11 + z13;
tmp11 = (z11 - z13) * 1.414213562;
z5 = (z10 + z12) * 1.847759065;
tmp10 = 1.082392200 * z12 - z5;
tmp12 = -2.613125930 * z10 + z5;
tmp6 = tmp12 - tmp7;
tmp5 = tmp11 - tmp6;
tmp4 = tmp10 + tmp5;
output.color0.x = tmp0 + tmp7;
output.color1.w = tmp0 - tmp7;
output.color0.y = tmp1 + tmp6;
output.color1.z = tmp1 - tmp6;
output.color0.z = tmp2 + tmp5;
output.color1.y = tmp2 - tmp5;
output.color1.x = tmp3 + tmp4;
output.color0.w = tmp3 - tmp4;
return output;
}
PSOut PS_IDCT_Unpack_Columns( PSIn input )
{
PSOut output;
// Get eight values stored in 2 textures
float4 values1 = ColumnTexture1.Sample( samplerPoint, input.texCoord );
float4 values2 = ColumnTexture2.Sample( samplerPoint, input.texCoord );
// Calculate index masks with a single non-zero component to select the element to use
int index = frac( input.texCoord.y * g_ColScale ) * 8.0;
float4 indexMask1 = ( index == float4( 0, 1, 2, 3 ) );
float4 indexMask2 = ( index == float4( 4, 5, 6, 7 ) );
output.color = clamp( ( dot( values1, indexMask1 ) + dot( values2, indexMask2 ) ) * 0.125 + 128.0, 0.0, 256.0 );
return output;
}
PSOut PS_IDCT_RenderToBuffer( PSIn input )
{
PSOut output;
float Y = TextureY.Sample( samplerPoint, input.texCoord );
float Cb = TextureCb.Sample( samplerPoint, input.texCoord );
float Cr = TextureCr.Sample( samplerPoint, input.texCoord );
// Convert YCbCr -> RGB
output.color.x = Y + 1.402 * ( Cr - 128.0 );
output.color.y = Y - 0.34414 * ( Cb - 128.0 ) - 0.71414 * ( Cr - 128.0 );
output.color.z = Y + 1.772 * ( Cb - 128.0 );
output.color.w = TextureHeight.Sample( samplerPoint, input.texCoord );
output.color.xyzw *= ( 1.0 / 256.0 );
return output;
}
//--------------------------------------------------------------------------------------
// Compiled shaders used in different techniques
//--------------------------------------------------------------------------------------
VertexShader VS_Quad_Compiled = CompileShader( vs_4_0, VS_Quad() );
GeometryShader GS_Quad_Compiled = CompileShader( gs_4_0, GS_Quad() );
technique10 JPEG_Decompression
{
pass p0
{
SetVertexShader( VS_Quad_Compiled );
SetGeometryShader( GS_Quad_Compiled );
SetPixelShader( CompileShader( ps_4_0, PS_IDCT_Rows() ) );
}
pass p1
{
SetVertexShader( VS_Quad_Compiled );
SetGeometryShader( GS_Quad_Compiled );
SetPixelShader( CompileShader( ps_4_0, PS_IDCT_Unpack_Rows() ) );
}
pass p2
{
SetVertexShader( VS_Quad_Compiled );
SetGeometryShader( GS_Quad_Compiled );
SetPixelShader( CompileShader( ps_4_0, PS_IDCT_Columns() ) );
}
pass p3
{
SetVertexShader( VS_Quad_Compiled );
SetGeometryShader( GS_Quad_Compiled );
SetPixelShader( CompileShader( ps_4_0, PS_IDCT_Unpack_Columns() ) );
}
pass p4
{
SetVertexShader( VS_Quad_Compiled );
SetGeometryShader( GS_Quad_Compiled );
SetPixelShader( CompileShader( ps_4_0, PS_IDCT_RenderToBuffer() ) );
}
}