NVIDIA SDK 10: Clipmaps Sample
Abstract
Clipmaps are a feature first implemented on SGI workstations that allow mapping
extremely high resolution textures to terrains. The original SGI implementation
required highly specialized, custom hardware. The advanced features of the
NVIDIA® GeForce® 8800 now permit the same algorithm using consumer
hardware.
Although current APIs and the GeForce 8800 directly support textures with
dimensions up to 8192, even this size can be insufficient for wide landscapes,
such as those in flight simulators. Using a single texture for the whole
landscape is very attractive: the entire landscape can be designed at once and
parameterized simply. Big textures have "big" advantages over the traditional
approach of blending several smaller textures, because they can be as complex
as you wish. Once a designer has created the whole map, it can be used as is.
Clipmaps take advantage of the fact that, due to perspective projection, only
relatively small regions within the texture mipmap pyramid are accessed in any
given frame. The task, then, is to manage these "hot" regions and update them in
video memory as the viewer moves around. A DX10 solution is to store such
regions in a texture array; being able to index into it from the pixel shader
allows a straightforward implementation of the clipmap algorithm.
How Clipmaps Work
A clipmap can be defined as a partial representation of a mipmap pyramid that holds
all the information needed for texturing in any single frame. How do you determine
which data from the source texture can potentially be used? The answer lies in the
mipmap sample-selection strategy. The ideal case in texturing is a 1:1 mapping of
texels to pixel area, which lets you derive the clip size for the mipmap levels
from the current screen resolution. The lowest levels of the mipmap pyramid always
fit in video memory and can be used statically. All other mip levels form the
clipmap stack, which is updated dynamically so that it holds the currently needed
data in every frame (see Figure 1). In the most common cases, the contents of the
stack are fully defined by its size and the viewer's position.
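Under these definitions, the split between stack levels and static pyramid levels follows directly from the source size and the clip size. The following C++ sketch is illustrative only (ComputeStackDepth is a hypothetical helper, not part of the sample):

```cpp
#include <cassert>

// Count how many mip levels of a square source texture are larger than the
// clip size: those levels must go into the clipmap stack. The remaining
// levels (clipSize down to 1) fit in memory and form the static pyramid.
int ComputeStackDepth(int sourceSize, int clipSize)
{
    int depth = 0;
    for (int level = sourceSize; level > clipSize; level /= 2)
        ++depth;  // this level is clipped, so it becomes a stack layer
    return depth;
}
```

For example, a 16384-texel source with a 1024-texel clip size yields a stack of four layers (16384, 8192, 4096, and 2048), while the 1024-and-below levels stay in the static pyramid.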
The basic idea is to store the clipmap stack in a 2D texture array, a new feature
of DX10. The remaining part of the mipmap pyramid is implemented as a conventional
mipmapped 2D texture. You can perform the dynamic stack update using the
UpdateSubresource and CopySubresourceRegion methods. Of course, it will not always
be possible to hold all the required data in system memory, so you will also need
a mechanism to stream the necessary data efficiently from disk.
The clipmap stack is stored in a 2D texture array. This array forms the dynamic
part of the clipmap and must contain valid data for each stack level in every
frame. Since there is a separate array layer for each original mip level, create
this texture without mips. The remaining part of the image can be stored as a
conventional 2D texture.
Using the DX10 API, create these resources as follows (note that for the clipmap
stack texture you specify the number of layers in the ArraySize member):
D3D10_TEXTURE2D_DESC texDesc;
ZeroMemory( &texDesc, sizeof(texDesc) );
texDesc.ArraySize = 1;
texDesc.Usage = D3D10_USAGE_DEFAULT;
texDesc.BindFlags = D3D10_BIND_SHADER_RESOURCE;
texDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
texDesc.Width = g_PyramidTextureWidth;
texDesc.Height = g_PyramidTextureHeight;
texDesc.MipLevels = g_SourceImageMipsNum - g_StackDepth;
texDesc.SampleDesc.Count = 1;
pd3dDevice->CreateTexture2D(&texDesc, NULL, &g_pPyramidTexture);
texDesc.ArraySize = g_StackDepth;
texDesc.Width = g_ClipmapStackSize;
texDesc.Height = g_ClipmapStackSize;
texDesc.MipLevels = 1;
pd3dDevice->CreateTexture2D(&texDesc, NULL, &g_pStackTexture);
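When the viewer moves, only the newly exposed border of each stack layer needs uploading, and because of toroidal addressing that border may wrap around the texture edge. The following C++ sketch shows how such a wrapped region can be split into up to four non-wrapping rectangles, each of which could then be uploaded with UpdateSubresource (Rect and SplitWrappedRect are illustrative helpers, not from the sample):

```cpp
#include <algorithm>
#include <vector>

// An axis-aligned rectangle in stack-texture texels.
struct Rect { int x, y, w, h; };

// Split a rectangle whose toroidal position may cross the texture edge into
// at most four rectangles that lie entirely inside [0, texSize) x [0, texSize).
std::vector<Rect> SplitWrappedRect(Rect r, int texSize)
{
    std::vector<Rect> out;
    int x0 = r.x % texSize;                 // wrap the origin into the texture
    int y0 = r.y % texSize;
    int wx = std::min(r.w, texSize - x0);   // width before the horizontal seam
    int hy = std::min(r.h, texSize - y0);   // height before the vertical seam
    out.push_back({x0, y0, wx, hy});
    if (wx < r.w)             out.push_back({0, y0, r.w - wx, hy});
    if (hy < r.h)             out.push_back({x0, 0, wx, r.h - hy});
    if (wx < r.w && hy < r.h) out.push_back({0, 0, r.w - wx, r.h - hy});
    return out;
}
```

A region that crosses one seam splits into two rectangles; a region that crosses the corner splits into four.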
Clipmap Texture Addressing
All the work is done in the pixel shader. First you need to determine the mip
level to fetch from. Use the ddx and ddy instructions to find the size of the
pixel quad's footprint in texel space.
float2 dx = ddx( input.texCoord * textureSize.x );
float2 dy = ddy( input.texCoord * textureSize.y );
float d = max( sqrt( dot( dx, dx ) ), sqrt( dot( dy, dy ) ) );
Now you can easily calculate a suitable mip level as follows.
float mipLevel = log2( d );
Calculate the mipLevel as a float and use the fractional part to perform trilinear
filtering.
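The same selection logic can be checked on the CPU. This is an illustrative sketch (SelectMip is a hypothetical helper, not code from the sample): the integer part of log2(d) selects the stack layer, and the fractional part drives the trilinear blend between neighboring layers.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Mirror of the shader's mip selection: d is the larger screen-space footprint
// of the pixel quad in texels. The integer part of log2(d) picks the layer
// (clamped to the stack depth); the fractional part is the trilinear blend
// factor, as modf() produces in the shader.
void SelectMip(float d, int stackDepth, int& level, float& blend)
{
    float mip = std::max(std::log2(d), 0.0f);
    level = std::min(static_cast<int>(mip), stackDepth - 1);
    blend = mip - static_cast<int>(mip);
}
```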
Clipmap texture addressing is rather simple: divide the input texture coordinates
by 2 raised to the mip level, multiply by the scale factor, and add 0.5 (the
stack center at coordinates (0.5, 0.5) corresponds to the corner (0, 0) of the
original image). The scale factor is the source image size divided by the
clipmap stack size.
float2 clipTexCoord = input.texCoord / pow( 2, iMipLevel );
clipTexCoord = clipTexCoord * scaleFactor + 0.5f;
float4 color = StackTexture.Sample( stackSampler,
float3( clipTexCoord, iMipLevel ) );
For the stack sampler, specify the address mode as wrap to implement toroidal
addressing.
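Putting the addressing together, an illustrative CPU equivalent (Float2 and StackTexCoord are hypothetical helpers, and the wrap sampler is emulated with floor()) looks like this:

```cpp
#include <cassert>
#include <cmath>

struct Float2 { float x, y; };

// Compute the stack-texture coordinate for a given layer: scale the input
// coordinate for the chosen mip level, offset by the stack center (0.5),
// then reduce to [0, 1) exactly as wrap (toroidal) addressing would.
Float2 StackTexCoord(Float2 uv, int mipLevel, Float2 scaleFactor)
{
    float s = std::pow(2.0f, static_cast<float>(mipLevel));
    Float2 c = { uv.x / s * scaleFactor.x + 0.5f,
                 uv.y / s * scaleFactor.y + 0.5f };
    c.x -= std::floor(c.x);  // wrap addressing
    c.y -= std::floor(c.y);
    return c;
}
```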
Table 1. Storage Efficiency*

Texture size     4096²            8192²            16384²
Full mipmap      85.3 MB          341.3 MB         5461.3 MB
1024 clipmap     13.3 MB (16%)    17.3 MB (5%)     25.3 MB (<1%)
2048 clipmap     37.3 MB (44%)    53.3 MB (16%)    85.3 MB (1.6%)
4096 clipmap     85.3 MB (100%)   149.3 MB (44%)   213.3 MB (3.9%)

*Memory cost of storing 32-bit texels; percentages are relative to the full mipmap.
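The figures follow from a simple cost model: a full mipmap chain costs about 4/3 of its base level, and a clipmap costs its stack layers plus a full mipmap of the clip-sized pyramid. The C++ sketch below reproduces the 4096² column (helper names are illustrative, not from the sample):

```cpp
#include <cassert>
#include <cmath>

// Full mipmap chain for a size x size texture of 32-bit texels, in MB:
// base level plus the ~1/3 overhead of all lower mips.
double FullMipmapMB(double size)
{
    return size * size * 4.0 * (4.0 / 3.0) / (1024.0 * 1024.0);
}

// Clipmap cost: stackDepth clipped layers of clipSize x clipSize texels,
// plus a full mipmap chain for the static clip-sized pyramid.
double ClipmapMB(double clipSize, int stackDepth)
{
    return stackDepth * clipSize * clipSize * 4.0 / (1024.0 * 1024.0)
         + FullMipmapMB(clipSize);
}
```

For instance, a 4096² source costs about 85.3 MB as a full mipmap, while a 1024 clipmap over it (stack depth 2) costs about 13.3 MB, matching the first column of the table.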
Texture Filtering: Anisotropic / Trilinear
//----------------------------------------------------------------------------------
// File: Clipmaps.fx
// Author: Evgeny Makarov
// Email: sdkfeedback@nvidia.com
//
// Copyright (c) 2007 NVIDIA Corporation. All rights reserved.
//
// TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THIS SOFTWARE IS PROVIDED
// *AS IS* AND NVIDIA AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, EITHER EXPRESS
// OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY
// AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL NVIDIA OR ITS SUPPLIERS
// BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT, OR CONSEQUENTIAL DAMAGES
// WHATSOEVER (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF BUSINESS PROFITS,
// BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION, OR ANY OTHER PECUNIARY LOSS)
// ARISING OUT OF THE USE OF OR INABILITY TO USE THIS SOFTWARE, EVEN IF NVIDIA HAS
// BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
//
//
//----------------------------------------------------------------------------------
Texture2D PyramidTexture;
Texture2D PyramidTextureHM;
Texture2DArray StackTexture;
#define MAX_ANISOTROPY 16
#define MIP_LEVELS_MAX 7
SamplerState samplerLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
SamplerState samplerAnisotropic
{
Filter = ANISOTROPIC;
MaxAnisotropy = MAX_ANISOTROPY;
AddressU = Wrap;
AddressV = Wrap;
};
SamplerState samplerPoint
{
Filter = MIN_MAG_MIP_POINT;
AddressU = Wrap;
AddressV = Wrap;
};
SamplerState samplerStackLinear
{
Filter = MIN_MAG_LINEAR_MIP_POINT;
AddressU = Wrap;
AddressV = Wrap;
};
RasterizerState RStateMSAA
{
MultisampleEnable = TRUE;
};
struct VSIn
{
uint index : SV_VertexID;
};
struct PSIn
{
float4 position : SV_Position;
float2 texCoord : TEXCOORD0;
float3 viewVectorTangent : TEXCOORD1;
float3 lightVectorTangent : TEXCOORD2;
};
struct PSInQuad
{
float4 position : SV_Position;
float3 texCoord : TEXCOORD0;
};
struct PSOut
{
float4 color : SV_Target;
};
struct PSOutQuad
{
float4 color : SV_Target;
};
cbuffer cb0
{
row_major float4x4 g_ModelViewProj;
float3 g_EyePosition;
float3 g_LightPosition;
float3 g_WorldRight;
float3 g_WorldUp;
};
cbuffer cb1
{
int2 g_TextureSize; // Source texture size
float2 g_StackCenter; // Stack center position defined by normalized texture coordinates
uint g_StackDepth; // Number of layers in a stack
float2 g_ScaleFactor; // SourceImageSize / ClipmapStackSize
float3 g_MipColors[MIP_LEVELS_MAX];
int g_SphereMeridianSlices;
int g_SphereParallelSlices;
float g_ScreenAspectRatio;
}
//--------------------------------------------------------------------------------------
// Calculate local normal using height values from Texture2D
//--------------------------------------------------------------------------------------
float3 GetLocalNormal(Texture2D _texture, SamplerState _sampler, float2 _coordinates)
{
float3 localNormal;
localNormal.x = _texture.Sample( _sampler, _coordinates, int2( 1, 0) ).x;
localNormal.x -= _texture.Sample( _sampler, _coordinates, int2(-1, 0) ).x;
localNormal.y = _texture.Sample( _sampler, _coordinates, int2( 0, 1) ).x;
localNormal.y -= _texture.Sample( _sampler, _coordinates, int2( 0, -1) ).x;
localNormal.z = sqrt( 1.0 - localNormal.x * localNormal.x - localNormal.y * localNormal.y );
return localNormal;
}
//--------------------------------------------------------------------------------------
// Calculate local normal using height values from Texture2DArray
//--------------------------------------------------------------------------------------
float3 GetLocalNormal_Array(Texture2DArray _texture, SamplerState _sampler, float3 _coordinates)
{
float3 localNormal;
localNormal.x = _texture.Sample( _sampler, _coordinates, int2( 1, 0) ).w;
localNormal.x -= _texture.Sample( _sampler, _coordinates, int2(-1, 0) ).w;
localNormal.y = _texture.Sample( _sampler, _coordinates, int2( 0, 1) ).w;
localNormal.y -= _texture.Sample( _sampler, _coordinates, int2( 0, -1) ).w;
localNormal.xy *= 5.0 / ( _coordinates.z + 1.0 ); // Scale the normal vector to add relief
localNormal.z = sqrt( 1.0 - localNormal.x * localNormal.x - localNormal.y * localNormal.y );
return localNormal;
}
//--------------------------------------------------------------------------------------
// Calculate a minimum stack level to fetch from
//--------------------------------------------------------------------------------------
float GetMinimumStackLevel(float2 coordinates)
{
float2 distance;
distance.x = abs( coordinates.x - g_StackCenter.x );
distance.x = min( distance.x, 1.0 - distance.x );
distance.y = abs( coordinates.y - g_StackCenter.y );
distance.y = min( distance.y, 1.0 - distance.y );
return max( log2( distance.x * g_ScaleFactor.x * 4.0 ), log2( distance.y * g_ScaleFactor.y * 4.0 ) );
}
//--------------------------------------------------------------------------------------
// Calculate vertex positions for procedural sphere mesh based on an input index buffer
//--------------------------------------------------------------------------------------
PSIn VSMain(VSIn input)
{
PSIn output;
float meridianPart = ( input.index % ( g_SphereMeridianSlices + 1 ) ) / float( g_SphereMeridianSlices );
float parallelPart = ( input.index / ( g_SphereMeridianSlices + 1 ) ) / float( g_SphereParallelSlices );
float angle1 = meridianPart * 3.14159265 * 2.0;
float angle2 = ( parallelPart - 0.5 ) * 3.14159265;
float cos_angle1 = cos( angle1 );
float sin_angle1 = sin( angle1 );
float cos_angle2 = cos( angle2 );
float sin_angle2 = sin( angle2 );
float3 VertexPosition;
VertexPosition.z = cos_angle1 * cos_angle2;
VertexPosition.x = sin_angle1 * cos_angle2;
VertexPosition.y = sin_angle2;
output.position = mul( float4( VertexPosition, 1.0 ), g_ModelViewProj );
output.texCoord = float2( 1.0 - meridianPart, 1.0 - parallelPart );
float3 tangent = float3( cos_angle1, 0.0, -sin_angle1 );
float3 binormal = float3( -sin_angle1 * sin_angle2, cos_angle2, -cos_angle1 * sin_angle2 );
float3 viewVector = normalize(g_EyePosition - VertexPosition);
output.viewVectorTangent.x = dot( viewVector, tangent );
output.viewVectorTangent.y = dot( viewVector, binormal);
output.viewVectorTangent.z = dot( viewVector, VertexPosition );
float3 lightVector = normalize( g_LightPosition );
output.lightVectorTangent.x = dot( lightVector, tangent );
output.lightVectorTangent.y = dot( lightVector, binormal);
output.lightVectorTangent.z = dot( lightVector, VertexPosition );
return output;
}
PSInQuad VSMainQuad(VSIn input)
{
PSInQuad output;
// We don't need to do any calculations here because everything
// is done in the geometry shader.
output.position = 0;
output.texCoord = 0;
return output;
}
[maxvertexcount(4)]
void GSMainQuad( point PSInQuad inputPoint[1], inout TriangleStream<PSInQuad> outputQuad, uint primitive : SV_PrimitiveID )
{
PSInQuad output;
output.position.z = 0.5;
output.position.w = 1.0;
output.texCoord.z = primitive;
float sizeY = 0.3;
float sizeX = sizeY * 1.2 / g_ScreenAspectRatio;
float offset = 0.7 - min( 1.2 / g_StackDepth, sizeY ) * primitive;
output.position.x = -0.9 - sizeX * 0.2;
output.position.y = offset;
output.texCoord.xy = float2( 0.0, 0.0 );
outputQuad.Append( output );
output.position.x = -0.9 + sizeX * 0.8;
output.position.y = offset + sizeY * 0.2;
output.texCoord.xy = float2( 1.0, 0.0 );
outputQuad.Append( output );
output.position.x = -0.9;
output.position.y = offset - sizeY - sizeY * 0.2;
output.texCoord.xy = float2( 0.0, 1.0 );
outputQuad.Append( output );
output.position.x = -0.9 + sizeX;
output.position.y = offset - sizeY;
output.texCoord.xy = float2( 1.0, 1.0 );
outputQuad.Append( output );
outputQuad.RestartStrip();
}
PSOut PS_Trilinear(PSIn input)
{
PSOut output;
// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float d = max( sqrt( dot( dx.x, dx.x ) + dot( dx.y, dx.y ) ) , sqrt( dot( dy.x, dy.x ) + dot( dy.y, dy.y ) ) );
// Calculate base mip level and fractional blending part for trilinear filtering.
float mipLevel = max( log2( d ), GetMinimumStackLevel( input.texCoord ) );
float blendGlobal = saturate(g_StackDepth - mipLevel);
float diffuse = saturate( input.lightVectorTangent.z );
diffuse = max( diffuse, 0.05 );
float4 color0 = PyramidTexture.Sample( samplerLinear, input.texCoord );
// Make early out for cases where we don't need to fetch from clipmap stack
if( blendGlobal == 0.0 )
{
output.color = color0 * diffuse;
}
else
{
// This fractional part defines the factor used for blending
// between two neighbour stack layers
float blendLayers = modf(mipLevel, mipLevel);
blendLayers = saturate(blendLayers);
int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, g_StackDepth - 1 );
mipLevel = clamp( mipLevel, 0, g_StackDepth - 1 );
// Here we need to perform proper scaling for input texture coordinates.
// For each layer we multiply input coordinates by g_ScaleFactor / pow( 2, layer ).
// We add 0.5 to result, because our stack center with coordinates (0.5, 0.5)
// starts from corner with coordinates (0, 0) of the original image.
float2 clipTexCoord = input.texCoord / pow( 2, mipLevel );
clipTexCoord *= g_ScaleFactor;
float4 color1 = StackTexture.Sample( samplerStackLinear, float3( clipTexCoord + 0.5, mipLevel ) );
clipTexCoord = input.texCoord / pow( 2, nextMipLevel );
clipTexCoord *= g_ScaleFactor;
float4 color2 = StackTexture.Sample( samplerStackLinear, float3( clipTexCoord + 0.5, nextMipLevel ) );
output.color = lerp( color0, lerp( color1, color2, blendLayers ), blendGlobal ) * diffuse;
}
return output;
}
PSOut PS_Trilinear_Parallax(PSIn input)
{
PSOut output;
// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float d = max( sqrt( dot( dx.x, dx.x ) + dot( dx.y, dx.y ) ) , sqrt( dot( dy.x, dy.x ) + dot( dy.y, dy.y ) ) );
// Calculate base mip level and fractional blending part.
float mipLevel = max( log2( d ), GetMinimumStackLevel( input.texCoord ) );
float blendGlobal = saturate( g_StackDepth - mipLevel );
float2 viewVectorTangent = normalize( input.viewVectorTangent ).xy;
float2 scaledViewVector = viewVectorTangent / g_ScaleFactor;
float3 lightVector = normalize(input.lightVectorTangent);
float2 newCoordinates = input.texCoord - scaledViewVector * ( PyramidTextureHM.Sample( samplerLinear, input.texCoord ).x * 0.02 - 0.01 );
float3 normal = GetLocalNormal( PyramidTextureHM, samplerLinear, newCoordinates );
float diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
float4 color0 = PyramidTexture.Sample( samplerLinear, newCoordinates ) * diffuse;
if( blendGlobal == 0.0 )
{
output.color = color0;
}
else
{
float blendLayers = modf( mipLevel, mipLevel );
blendLayers = saturate( blendLayers );
int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, g_StackDepth - 1 );
mipLevel = clamp( mipLevel, 0, g_StackDepth - 1 );
float scale = pow( 2, mipLevel );
float2 clipTexCoord = input.texCoord / scale;
clipTexCoord *= g_ScaleFactor;
float height = StackTexture.Sample( samplerStackLinear, float3(clipTexCoord + 0.5, mipLevel) ).w * 0.02 - 0.01;
newCoordinates = clipTexCoord - viewVectorTangent * height / scale + 0.5;
float4 color1 = StackTexture.Sample( samplerStackLinear, float3( newCoordinates, mipLevel ) );
normal = GetLocalNormal_Array( StackTexture, samplerStackLinear, float3( newCoordinates, mipLevel ) );
diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
color1 *= diffuse;
scale = pow( 2, nextMipLevel );
clipTexCoord = input.texCoord / scale;
clipTexCoord *= g_ScaleFactor;
height = StackTexture.Sample( samplerStackLinear, float3( clipTexCoord + 0.5, nextMipLevel ) ).w * 0.02 - 0.01;
newCoordinates = clipTexCoord - viewVectorTangent * height / scale + 0.5;
float4 color2 = StackTexture.Sample( samplerStackLinear, float3( newCoordinates, nextMipLevel ) );
normal = GetLocalNormal_Array( StackTexture, samplerStackLinear, float3( newCoordinates, nextMipLevel ) );
diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
color2 *= diffuse;
output.color = lerp( color0, lerp( color1, color2, blendLayers ), blendGlobal );
}
return output;
}
PSOut PS_Anisotropic(PSIn input)
{
PSOut output;
// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float squaredLengthX = dot( dx.x, dx.x ) + dot( dx.y, dx.y );
float squaredLengthY = dot( dy.x, dy.x ) + dot( dy.y, dy.y );
float det = abs(dx.x * dy.y - dx.y * dy.x);
bool isMajorX = squaredLengthX > squaredLengthY;
float squaredLengthMajor = isMajorX ? squaredLengthX : squaredLengthY;
float lengthMajor = sqrt( squaredLengthMajor );
float ratioOfAnisotropy = squaredLengthMajor / det;
float lengthMinor = ( ratioOfAnisotropy > MAX_ANISOTROPY ) ? lengthMajor / ratioOfAnisotropy : det / lengthMajor;
lengthMinor = max( lengthMinor, 1.0 );
// Calculate base mip level and fractional blending part.
float mipLevel = max( log2( lengthMinor ), GetMinimumStackLevel( input.texCoord ) );
float blendGlobal = saturate( g_StackDepth - mipLevel );
float diffuse = saturate( input.lightVectorTangent.z );
diffuse = max( diffuse, 0.05 );
float4 color0 = PyramidTexture.Sample( samplerAnisotropic, input.texCoord );
if( blendGlobal == 0.0 )
{
output.color = color0 * diffuse;
}
else
{
float blendLayers = modf( mipLevel, mipLevel );
int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, g_StackDepth - 1 );
mipLevel = clamp( mipLevel, 0, g_StackDepth - 1 );
float2 clipTexCoord = input.texCoord / pow( 2, mipLevel );
clipTexCoord *= g_ScaleFactor;
float4 color1 = StackTexture.Sample( samplerAnisotropic, float3( clipTexCoord + 0.5, mipLevel ) );
clipTexCoord = input.texCoord / pow( 2, nextMipLevel );
clipTexCoord *= g_ScaleFactor;
float4 color2 = StackTexture.Sample( samplerAnisotropic, float3(clipTexCoord + 0.5, nextMipLevel ) );
output.color = lerp( color0, lerp( color1, color2, blendLayers ), blendGlobal ) * diffuse;
}
return output;
}
PSOut PS_Anisotropic_Parallax(PSIn input)
{
PSOut output;
// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float squaredLengthX = dot( dx.x, dx.x ) + dot( dx.y, dx.y );
float squaredLengthY = dot( dy.x, dy.x ) + dot( dy.y, dy.y );
float det = abs( dx.x * dy.y - dx.y * dy.x );
bool isMajorX = squaredLengthX > squaredLengthY;
float squaredLengthMajor = isMajorX ? squaredLengthX : squaredLengthY;
float lengthMajor = sqrt( squaredLengthMajor );
float ratioOfAnisotropy = squaredLengthMajor / det;
float lengthMinor = ( ratioOfAnisotropy > MAX_ANISOTROPY ) ? lengthMajor / ratioOfAnisotropy : det / lengthMajor;
lengthMinor = max( lengthMinor, 1.0 );
// Calculate base mip level and fractional blending part.
float mipLevel = max( log2( lengthMinor), GetMinimumStackLevel( input.texCoord ) );
float blendGlobal = saturate( g_StackDepth - mipLevel );
float2 viewVectorTangent = normalize( input.viewVectorTangent ).xy;
float2 scaledViewVector = viewVectorTangent / g_ScaleFactor;
float3 lightVector = normalize( input.lightVectorTangent );
float2 newCoordinates = input.texCoord - scaledViewVector * ( PyramidTextureHM.Sample( samplerLinear, input.texCoord ).x * 0.02 - 0.01 );
float3 normal = GetLocalNormal( PyramidTextureHM, samplerAnisotropic, newCoordinates );
float diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
float4 color0 = PyramidTexture.Sample( samplerAnisotropic, newCoordinates ) * diffuse;
if( blendGlobal == 0.0 )
{
output.color = color0;
}
else
{
float blendLayers = modf( mipLevel, mipLevel );
blendLayers = saturate( blendLayers );
int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, g_StackDepth - 1 );
mipLevel = clamp( mipLevel, 0, g_StackDepth - 1 );
float scale = pow( 2, mipLevel );
float2 clipTexCoord = input.texCoord / scale;
clipTexCoord *= g_ScaleFactor;
clipTexCoord += 0.5;
float height = StackTexture.Sample( samplerAnisotropic, float3( clipTexCoord, mipLevel ) ).w * 0.02 - 0.01;
newCoordinates = clipTexCoord - viewVectorTangent * height / scale;
float4 color1 = StackTexture.Sample( samplerAnisotropic, float3( newCoordinates, mipLevel ) );
normal = GetLocalNormal_Array( StackTexture, samplerAnisotropic, float3( newCoordinates, mipLevel ) );
diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
color1 *= diffuse;
scale = pow( 2, nextMipLevel );
clipTexCoord = input.texCoord / scale;
clipTexCoord *= g_ScaleFactor;
clipTexCoord += 0.5f;
height = StackTexture.Sample( samplerAnisotropic, float3( clipTexCoord, nextMipLevel ) ).w * 0.02 - 0.01;
newCoordinates = clipTexCoord - viewVectorTangent * height / scale;
float4 color2 = StackTexture.Sample( samplerAnisotropic, float3( newCoordinates, nextMipLevel ) );
normal = GetLocalNormal_Array( StackTexture, samplerAnisotropic, float3( newCoordinates, nextMipLevel ) );
diffuse = saturate( dot( lightVector, normal ) );
diffuse = max( diffuse, 0.05 );
color2 *= diffuse;
output.color = lerp( color0, lerp( color1, color2, blendLayers ), blendGlobal );
}
return output;
}
//--------------------------------------------------------------------------------------
// Calculate color values to show
//--------------------------------------------------------------------------------------
PSOut PS_Color(PSIn input)
{
PSOut output;
// Calculate texture coordinates gradients.
float2 dx = ddx( input.texCoord * g_TextureSize.x );
float2 dy = ddy( input.texCoord * g_TextureSize.y );
float d = max( sqrt( dot( dx.x, dx.x ) + dot( dx.y, dx.y ) ) , sqrt( dot( dy.x, dy.x ) + dot( dy.y, dy.y ) ) );
// Calculate base mip level and fractional blending part.
float mipLevel = log2( d );
float blendLayers = modf( mipLevel, mipLevel );
int mipBoundary = min( g_StackDepth, MIP_LEVELS_MAX ) - 1;
int nextMipLevel = mipLevel + 1;
nextMipLevel = clamp( nextMipLevel, 0, mipBoundary );
mipLevel = clamp( mipLevel, 0, mipBoundary );
output.color.xyz = lerp( g_MipColors[mipLevel], g_MipColors[nextMipLevel], blendLayers );
return output;
}
PSOut PSQuad(PSInQuad input)
{
PSOut output;
float width = 0.995 - input.texCoord.x;
width = saturate( width * 50.0 );
output.color.xyz = lerp( g_MipColors[input.texCoord.z], StackTexture.Sample( samplerStackLinear, input.texCoord ), width);
output.color.w = 1.0;
return output;
}
//--------------------------------------------------------------------------------------
// Compiled shaders used in different techniques
//--------------------------------------------------------------------------------------
VertexShader vsCompiled = CompileShader( vs_4_0, VSMain() );
VertexShader vsCompiledQuad = CompileShader( vs_4_0, VSMainQuad() );
GeometryShader gsCompiledQuad = CompileShader( gs_4_0, GSMainQuad() );
PixelShader ps_Trilinear = CompileShader( ps_4_0, PS_Trilinear() );
PixelShader ps_Trilinear_Parallax = CompileShader( ps_4_0, PS_Trilinear_Parallax() );
PixelShader ps_Anisotropic = CompileShader( ps_4_0, PS_Anisotropic() );
PixelShader ps_Anisotropic_Parallax = CompileShader( ps_4_0, PS_Anisotropic_Parallax() );
PixelShader ps_Color = CompileShader( ps_4_0, PS_Color() );
PixelShader psComiledQuad = CompileShader( ps_4_0, PSQuad() );
technique10 Trilinear
{
pass p0
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Trilinear );
SetRasterizerState(RStateMSAA);
}
pass p1
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Trilinear_Parallax );
SetRasterizerState(RStateMSAA);
}
}
technique10 Anisotropic
{
pass p0
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Anisotropic );
SetRasterizerState(RStateMSAA);
}
pass p1
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Anisotropic_Parallax );
SetRasterizerState(RStateMSAA);
}
}
technique10 ColoredMips
{
pass p0
{
SetVertexShader( vsCompiled );
SetGeometryShader( NULL );
SetPixelShader( ps_Color );
SetRasterizerState(RStateMSAA);
}
}
technique10 StackDrawPass
{
pass p0
{
SetVertexShader( vsCompiledQuad );
SetGeometryShader( gsCompiledQuad );
SetPixelShader( psComiledQuad );
SetRasterizerState(RStateMSAA);
}
}
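For reference, the 8-point IDCT butterfly used by PS_IDCT_Rows and PS_IDCT_Columns in the JPEG_Preprocessor.fx listing below can be expressed on the CPU as follows. This is a sketch with the same constants and output ordering; the dequantization and the 128.0 scale applied in the shader are omitted.

```cpp
#include <cassert>
#include <cmath>

// 8-point AAN-style IDCT (after the Independent JPEG Group code), matching
// the butterfly in the pixel shaders: d holds the eight DCT coefficients,
// out receives the eight spatial samples.
void IDCT8(const float d[8], float out[8])
{
    // Even part (coefficients 0, 2, 4, 6).
    float tmp10 = d[0] + d[4];
    float tmp11 = d[0] - d[4];
    float tmp13 = d[2] + d[6];
    float tmp12 = (d[2] - d[6]) * 1.414213562f - tmp13;
    float t0 = tmp10 + tmp13, t3 = tmp10 - tmp13;
    float t1 = tmp11 + tmp12, t2 = tmp11 - tmp12;
    // Odd part (coefficients 1, 3, 5, 7).
    float z13 = d[5] + d[3], z10 = d[5] - d[3];
    float z11 = d[1] + d[7], z12 = d[1] - d[7];
    float t7  = z11 + z13;
    float u11 = (z11 - z13) * 1.414213562f;
    float z5  = (z10 + z12) * 1.847759065f;
    float u10 = 1.082392200f * z12 - z5;
    float u12 = -2.613125930f * z10 + z5;
    float t6 = u12 - t7;
    float t5 = u11 - t6;
    float t4 = u10 + t5;
    out[0] = t0 + t7;  out[7] = t0 - t7;
    out[1] = t1 + t6;  out[6] = t1 - t6;
    out[2] = t2 + t5;  out[5] = t2 - t5;
    out[3] = t3 + t4;  out[4] = t3 - t4;
}
```

A quick sanity check: a block with only the DC coefficient set decodes to eight equal samples.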
//----------------------------------------------------------------------------------
// File: JPEG_Preprocessor.fx
// Author: Evgeny Makarov
// Email: sdkfeedback@nvidia.com
//
// Copyright (c) 2007 NVIDIA Corporation. All rights reserved.
//
// TO THE MAXIMUM EXTENT PERMITTED BY APPLICABLE LAW, THIS SOFTWARE IS PROVIDED
// *AS IS* AND NVIDIA AND ITS SUPPLIERS DISCLAIM ALL WARRANTIES, EITHER EXPRESS
// OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, IMPLIED WARRANTIES OF MERCHANTABILITY
// AND FITNESS FOR A PARTICULAR PURPOSE. IN NO EVENT SHALL NVIDIA OR ITS SUPPLIERS
// BE LIABLE FOR ANY SPECIAL, INCIDENTAL, INDIRECT, OR CONSEQUENTIAL DAMAGES
// WHATSOEVER (INCLUDING, WITHOUT LIMITATION, DAMAGES FOR LOSS OF BUSINESS PROFITS,
// BUSINESS INTERRUPTION, LOSS OF BUSINESS INFORMATION, OR ANY OTHER PECUNIARY LOSS)
// ARISING OUT OF THE USE OF OR INABILITY TO USE THIS SOFTWARE, EVEN IF NVIDIA HAS
// BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
//
//
//----------------------------------------------------------------------------------
Texture2D TextureDCT;
Texture2D<uint1> QuantTexture;
Texture2D RowTexture1;
Texture2D RowTexture2;
Texture2D ColumnTexture1;
Texture2D ColumnTexture2;
Texture2D TargetTexture;
Texture2D TextureY;
Texture2D TextureCb;
Texture2D TextureCr;
Texture2D TextureHeight;
SamplerState samplerPoint
{
Filter = MIN_MAG_MIP_POINT;
AddressU = Wrap;
AddressV = Wrap;
};
SamplerState samplerLinear
{
Filter = MIN_MAG_MIP_LINEAR;
AddressU = Wrap;
AddressV = Wrap;
};
BlendState NoBlending
{
BlendEnable[0] = FALSE;
};
struct VSIn
{
uint index : SV_VertexID;
};
struct PSIn
{
float4 position : SV_Position;
float3 texCoord : TEXCOORD0;
};
struct PSOut
{
float4 color : SV_Target;
};
struct PSOutMRT
{
float4 color0 : SV_Target0;
float4 color1 : SV_Target1;
};
cbuffer cb0
{
float g_RowScale;
float g_ColScale;
};
PSIn VS_Quad(VSIn input)
{
PSIn output;
output.position = 0;
output.texCoord = 0;
return output;
}
[maxvertexcount(4)]
void GS_Quad( point PSIn inputPoint[1], inout TriangleStream<PSIn> outputQuad, uint primitive : SV_PrimitiveID )
{
PSIn output;
output.position.z = 0.5;
output.position.w = 1.0;
output.texCoord.z = primitive;
output.position.x = -1.0;
output.position.y = 1.0;
output.texCoord.xy = float2( 0.0, 0.0 );
outputQuad.Append( output );
output.position.x = 1.0;
output.position.y = 1.0;
output.texCoord.xy = float2( 1.0, 0.0 );
outputQuad.Append( output );
output.position.x = -1.0;
output.position.y = -1.0;
output.texCoord.xy = float2( 0.0, 1.0 );
outputQuad.Append( output );
output.position.x = 1.0;
output.position.y = -1.0;
output.texCoord.xy = float2( 1.0, 1.0 );
outputQuad.Append( output );
outputQuad.RestartStrip();
}
/////////////////////////////////////////////////////////////////////////////
// JPEG Decompression
// IDCT based on Independent JPEG Group code
/////////////////////////////////////////////////////////////////////////////
PSOutMRT PS_IDCT_Rows( PSIn input )
{
PSOutMRT output;
float d[8];
// Read row elements
d[0] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( -4, 0 ) );
d[1] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( -3, 0 ) );
d[2] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( -2, 0 ) );
d[3] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( -1, 0 ) );
d[4] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( 0, 0 ) );
d[5] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( 1, 0 ) );
d[6] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( 2, 0 ) );
d[7] = 128.0 * TextureDCT.Sample( samplerPoint, input.texCoord, int2( 3, 0 ) );
// Perform dequantization
d[0] *= QuantTexture.Sample( samplerPoint, float2( 0.0625, input.texCoord.y * g_ColScale ) );
d[1] *= QuantTexture.Sample( samplerPoint, float2( 0.1875, input.texCoord.y * g_ColScale ) );
d[2] *= QuantTexture.Sample( samplerPoint, float2( 0.3125, input.texCoord.y * g_ColScale ) );
d[3] *= QuantTexture.Sample( samplerPoint, float2( 0.4375, input.texCoord.y * g_ColScale ) );
d[4] *= QuantTexture.Sample( samplerPoint, float2( 0.5625, input.texCoord.y * g_ColScale ) );
d[5] *= QuantTexture.Sample( samplerPoint, float2( 0.6875, input.texCoord.y * g_ColScale ) );
d[6] *= QuantTexture.Sample( samplerPoint, float2( 0.8125, input.texCoord.y * g_ColScale ) );
d[7] *= QuantTexture.Sample( samplerPoint, float2( 0.9375, input.texCoord.y * g_ColScale ) );
float tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7;
float tmp10, tmp11, tmp12, tmp13;
float z5, z10, z11, z12, z13;
tmp0 = d[0];
tmp1 = d[2];
tmp2 = d[4];
tmp3 = d[6];
tmp10 = tmp0 + tmp2;
tmp11 = tmp0 - tmp2;
tmp13 = tmp1 + tmp3;
tmp12 = (tmp1 - tmp3) * 1.414213562 - tmp13;
tmp0 = tmp10 + tmp13;
tmp3 = tmp10 - tmp13;
tmp1 = tmp11 + tmp12;
tmp2 = tmp11 - tmp12;
tmp4 = d[1];
tmp5 = d[3];
tmp6 = d[5];
tmp7 = d[7];
z13 = tmp6 + tmp5;
z10 = tmp6 - tmp5;
z11 = tmp4 + tmp7;
z12 = tmp4 - tmp7;
tmp7 = z11 + z13;
tmp11 = (z11 - z13) * 1.414213562;
z5 = (z10 + z12) * 1.847759065;
tmp10 = 1.082392200 * z12 - z5;
tmp12 = -2.613125930 * z10 + z5;
tmp6 = tmp12 - tmp7;
tmp5 = tmp11 - tmp6;
tmp4 = tmp10 + tmp5;
output.color0.x = tmp0 + tmp7;
output.color1.w = tmp0 - tmp7;
output.color0.y = tmp1 + tmp6;
output.color1.z = tmp1 - tmp6;
output.color0.z = tmp2 + tmp5;
output.color1.y = tmp2 - tmp5;
output.color1.x = tmp3 + tmp4;
output.color0.w = tmp3 - tmp4;
return output;
}
PSOut PS_IDCT_Unpack_Rows( PSIn input )
{
PSOut output;
// Get eight values stored in 2 textures
float4 values1 = RowTexture1.Sample( samplerPoint, input.texCoord );
float4 values2 = RowTexture2.Sample( samplerPoint, input.texCoord );
// Compute which of the eight elements this pixel corresponds to;
// the one-hot masks below select that element
int index = frac( input.texCoord.x * g_RowScale ) * 8.0;
float4 indexMask1 = ( index == float4( 0, 1, 2, 3 ) );
float4 indexMask2 = ( index == float4( 4, 5, 6, 7 ) );
output.color = dot( values1, indexMask1 ) + dot( values2, indexMask2 );
return output;
}
PSOutMRT PS_IDCT_Columns( PSIn input )
{
    PSOutMRT output;
    float d[8];
    // Read column elements
    d[0] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, -4 ) );
    d[1] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, -3 ) );
    d[2] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, -2 ) );
    d[3] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, -1 ) );
    d[4] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, 0 ) );
    d[5] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, 1 ) );
    d[6] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, 2 ) );
    d[7] = TargetTexture.Sample( samplerPoint, input.texCoord, int2( 0, 3 ) );
    float tmp0, tmp1, tmp2, tmp3, tmp4, tmp5, tmp6, tmp7;
    float tmp10, tmp11, tmp12, tmp13;
    float z5, z10, z11, z12, z13;
    // Even part of the 1-D AAN IDCT butterfly
    tmp0 = d[0];
    tmp1 = d[2];
    tmp2 = d[4];
    tmp3 = d[6];
    tmp10 = tmp0 + tmp2;
    tmp11 = tmp0 - tmp2;
    tmp13 = tmp1 + tmp3;
    tmp12 = (tmp1 - tmp3) * 1.414213562 - tmp13;
    tmp0 = tmp10 + tmp13;
    tmp3 = tmp10 - tmp13;
    tmp1 = tmp11 + tmp12;
    tmp2 = tmp11 - tmp12;
    // Odd part
    tmp4 = d[1];
    tmp5 = d[3];
    tmp6 = d[5];
    tmp7 = d[7];
    z13 = tmp6 + tmp5;
    z10 = tmp6 - tmp5;
    z11 = tmp4 + tmp7;
    z12 = tmp4 - tmp7;
    tmp7 = z11 + z13;
    tmp11 = (z11 - z13) * 1.414213562;
    z5 = (z10 + z12) * 1.847759065;
    tmp10 = 1.082392200 * z12 - z5;
    tmp12 = -2.613125930 * z10 + z5;
    tmp6 = tmp12 - tmp7;
    tmp5 = tmp11 - tmp6;
    tmp4 = tmp10 + tmp5;
    // Write the eight results interleaved across two render targets
    output.color0.x = tmp0 + tmp7;
    output.color1.w = tmp0 - tmp7;
    output.color0.y = tmp1 + tmp6;
    output.color1.z = tmp1 - tmp6;
    output.color0.z = tmp2 + tmp5;
    output.color1.y = tmp2 - tmp5;
    output.color1.x = tmp3 + tmp4;
    output.color0.w = tmp3 - tmp4;
    return output;
}
PSOut PS_IDCT_Unpack_Columns( PSIn input )
{
    PSOut output;
    // Get the eight values stored in two textures
    float4 values1 = ColumnTexture1.Sample( samplerPoint, input.texCoord );
    float4 values2 = ColumnTexture2.Sample( samplerPoint, input.texCoord );
    // Compute which of the eight elements this pixel maps to and build one-hot selection masks
    int index = frac( input.texCoord.y * g_ColScale ) * 8.0;
    float4 indexMask1 = ( index == float4( 0, 1, 2, 3 ) );
    float4 indexMask2 = ( index == float4( 4, 5, 6, 7 ) );
    // Select one value, apply the final IDCT scale (1/8) and undo the JPEG level shift (+128)
    output.color = clamp( ( dot( values1, indexMask1 ) + dot( values2, indexMask2 ) ) * 0.125 + 128.0, 0.0, 256.0 );
    return output;
}
PSOut PS_IDCT_RenderToBuffer( PSIn input )
{
    PSOut output;
    float Y = TextureY.Sample( samplerPoint, input.texCoord );
    float Cb = TextureCb.Sample( samplerPoint, input.texCoord );
    float Cr = TextureCr.Sample( samplerPoint, input.texCoord );
    // Convert YCbCr -> RGB
    output.color.x = Y + 1.402 * ( Cr - 128.0 );
    output.color.y = Y - 0.34414 * ( Cb - 128.0 ) - 0.71414 * ( Cr - 128.0 );
    output.color.z = Y + 1.772 * ( Cb - 128.0 );
    output.color.w = TextureHeight.Sample( samplerPoint, input.texCoord );
    output.color.xyzw *= ( 1.0 / 256.0 );
    return output;
}
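The constants in this last pass are the standard JFIF (BT.601 full-range) YCbCr-to-RGB coefficients, with chroma centered at 128. A minimal CPU equivalent can be sketched as follows; the function name is hypothetical and the explicit clamp is added for illustration (the shader instead normalizes by 1/256 for the render target):

```python
def ycbcr_to_rgb(y, cb, cr):
    """JFIF / BT.601 full-range YCbCr -> RGB, matching the shader constants."""
    r = y + 1.402 * (cr - 128.0)
    g = y - 0.34414 * (cb - 128.0) - 0.71414 * (cr - 128.0)
    b = y + 1.772 * (cb - 128.0)

    # Clamp to the displayable 8-bit range
    def clamp(v):
        return max(0.0, min(255.0, v))

    return clamp(r), clamp(g), clamp(b)
```

Neutral chroma (Cb = Cr = 128) passes luma through unchanged, which is a quick way to verify the coefficients are wired up correctly.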
//--------------------------------------------------------------------------------------
// Compiled shaders used in different techniques
//--------------------------------------------------------------------------------------
VertexShader VS_Quad_Compiled = CompileShader( vs_4_0, VS_Quad() );
GeometryShader GS_Quad_Compiled = CompileShader( gs_4_0, GS_Quad() );
technique10 JPEG_Decompression
{
    pass p0
    {
        SetVertexShader( VS_Quad_Compiled );
        SetGeometryShader( GS_Quad_Compiled );
        SetPixelShader( CompileShader( ps_4_0, PS_IDCT_Rows() ) );
    }
    pass p1
    {
        SetVertexShader( VS_Quad_Compiled );
        SetGeometryShader( GS_Quad_Compiled );
        SetPixelShader( CompileShader( ps_4_0, PS_IDCT_Unpack_Rows() ) );
    }
    pass p2
    {
        SetVertexShader( VS_Quad_Compiled );
        SetGeometryShader( GS_Quad_Compiled );
        SetPixelShader( CompileShader( ps_4_0, PS_IDCT_Columns() ) );
    }
    pass p3
    {
        SetVertexShader( VS_Quad_Compiled );
        SetGeometryShader( GS_Quad_Compiled );
        SetPixelShader( CompileShader( ps_4_0, PS_IDCT_Unpack_Columns() ) );
    }
    pass p4
    {
        SetVertexShader( VS_Quad_Compiled );
        SetGeometryShader( GS_Quad_Compiled );
        SetPixelShader( CompileShader( ps_4_0, PS_IDCT_RenderToBuffer() ) );
    }
}
