Xbox LIVE Indie Games
Page 1 of 1 (14 posts)

Reading/Writing Dxt textures

Last post 5/22/2009 4:53 AM by jwatte. 13 replies.
  • 4/13/2008 1:45 AM

    Reading/Writing Dxt textures

    Ok, here's my story.  So I'm working on the next version of Scurvy Media.  One of the things that I'd finally like to get in is the storage of texture/frame data as DXT compressed textures.  Unfortunately, the math required to read those textures has escaped me thus far.  I'm sure I'm missing something fundamental, so I figured I'd finally just ask those who know much more than I :-)

    Here's what I have:
    First, in my importer/processor, I load a Texture2DContent object that I then convert to the Dxt3BitmapContent pixel format:
    Texture2DContent c = new Texture2DContent();
    BitmapContent b = new PixelBitmapContent<Microsoft.Xna.Framework.Graphics.PackedVector.Bgr565>(bitmap.Width, bitmap.Height);
    c.ConvertBitmapType(typeof(Dxt3BitmapContent)); // the "ConvertBitmapType" line mentioned below

    Then, in the content writer, I write each "frame" to the XNB file
    byte[] pix = tex.Faces[0][0].GetPixelData();

    Then, at runtime in the content reader, I stream the pixel data out of the file stream and try to set it to the texture:
    int frameSize = reader.ReadInt32();
    if (Data == null)
        Data = new byte[frameSize];
    Pixels = new Bgr565[frameSize / 2];
    _stream.Read(Data, 0, frameSize);

    int currentPixel = 0;
    for (int i = 0; i < Data.Length; i += 2)
        Pixels[currentPixel++].PackedValue = (ushort)((Data[i + 1] << 8) + Data[i]);



    Now at the risk of exposing my ignorance, I'm sure the problem lies in the way that I am reading the data from the stream.  A good friend of mine helped me write that piece in the first place (for which I am deeply thankful :-P ).  So the question is, what's the right way of reading out the dxt data and setting it to the texture?

    Some stats:
    • Video resolution: 160 x 120
    • Size of the byte array that gets written to the XNB for one frame: 19200
    • Size of the Bgr565[] Array that I'm using: 9600
    • If I comment out the 'ConvertBitmapType' line, this all works without a hitch.
    Again, I will reiterate: I'm 100% certain that I'm going about this the wrong way ... some guidance would be very much appreciated :-)  Thanks in advance!!
  • 4/13/2008 2:49 PM In reply to

    Re: Reading/Writing Dxt textures

    This approach looks sane to me, although I'd probably use DXT1 rather than DXT3 format (assuming you don't have an alpha channel in the video, that will require only half the output size).

    DXT formats store data in 4x4 blocks. DXT1 uses 8 bytes per block, while DXT2 through DXT5 use 16. It's tempting to say DXT1 uses half a byte per texel and the others 1 byte per texel, but the block structure matters if your texture has mipmaps, since the smaller-than-4 mip levels get rounded up to an entire block.

    So it makes sense that in DXT3 format, your 160x120 image will require 40x30 blocks * 16 bytes per block = 19200 bytes.
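    The block math above is language-agnostic; here is a small Python sketch (the function name is mine, not from the thread) of how the expected data size for one mip level falls out of it:

    ```python
    def dxt_data_size(width, height, fmt):
        """Bytes needed for one mip level of a DXT-compressed texture.

        DXT formats store texels in 4x4 blocks, so each dimension is
        rounded up to whole blocks before multiplying.
        """
        blocks_wide = (width + 3) // 4
        blocks_high = (height + 3) // 4
        bytes_per_block = 8 if fmt == "DXT1" else 16  # DXT2-5 use 16
        return blocks_wide * blocks_high * bytes_per_block

    # 160x120 in DXT3: 40 * 30 blocks * 16 bytes per block
    print(dxt_data_size(160, 120, "DXT3"))  # 19200
    print(dxt_data_size(160, 120, "DXT1"))  # 9600
    ```

    Note how a 2x2 mip level still costs a full block, which is where the rounding-up matters.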

    I don't know why this would fail using a Bgr565 array (I've never tried that) but I'd be inclined to read the data in and set it from an array of 19200 bytes, rather than using the packed vector type here. Theoretically both should give the same result, but why complicate things by using a specific format that doesn't actually match the type of your data?

    What actually goes wrong when you run this code?
  • 4/13/2008 9:16 PM In reply to

    Re: Reading/Writing Dxt textures

    I get an InvalidOperationException with the following message:

    "The size of the data passed in is too large or too small for this resource."

    However, taking your advice, I changed it so that the surface format of the texture is DXT3 (I'll switch to using DXT1 as per your suggestion, but I figured I'd take it one step at a time :-) ), and also to just pass in the bytes directly.  I now get something rendering :-D However, it's a little funky.

    The top 1/4 of the video plays perfectly, but the bottom part of the video looks all jacked up as in the screenshot above.  Any thoughts as to what the culprit might be?
  • 4/21/2008 3:48 AM In reply to

    Re: Reading/Writing Dxt textures

    I hate to be tacky and bump my own post ... but **bump** :-)

    but seriously ... at a very minimum, I'd love to hear some thoughts about how I might debug this.  to review, it works perfectly fine when I don't convert to dxt.  But switching to dxt gives me a texture like the screenshot above.

    Once I get this, scurvy media will just about be ready for a new release that will definitely support larger videos (since the frames are now compressed).  I've gotten the disk streaming code multithreaded, and am just about done double buffering it to avoid stalls.  I also have it running on the xbox 360, though I need to do some perf tuning there.

    help me obi-wan, you're my only hope! :-P  thanks!!
  • 4/21/2008 1:45 PM In reply to

    Re: Reading/Writing Dxt textures

    Beats me!

    My first guess would be that you're somehow only setting 1/4 the necessary amount of data into this texture, and getting uninitialized garbage in the rest, but it isn't immediately obvious from your code how that could be happening.
  • 4/21/2008 4:25 PM In reply to

    Re: Reading/Writing Dxt textures

    That won't work. You are converting data to DXT3, and then you're reading that in and trying to use it as 565. When you convert the data to a new format, that's what you're getting in the bytes.

    I would second the recommendation to use DXT1, by the way. 565 has no alpha, so the DXT3 format is totally unnecessary.
  • 4/21/2008 5:40 PM In reply to

    Re: Reading/Writing Dxt textures

    Oops, I should have mentioned, I have in fact switched to using DXT1 with just about the same result (except that now the top half of the texture renders correctly, while the bottom half is gibberish).
    //In the Writer
    Texture2DContent tex;
    _size = new Vector2();
    _size.Y = (float)tex.Faces[0][0].Height;
    _size.X = (float)tex.Faces[0][0].Width;
    byte[] pix = tex.Faces[0][0].GetPixelData();

    And then at runtime
    //Initializing the Video
    _texture = new Texture2D(gdm.GraphicsDevice, (int)_size.X, (int)_size.Y, 1, TextureUsage.None, SurfaceFormat.Dxt1);
    byte[] data = new byte[dataLength];

    //Streaming the data off disk
    _stream.Read(data, 0, dataLength);
    _texture.SetData(data); // set the compressed bytes on the texture

    So as you can see, I'm using (as far as I can tell) the same bitmap type both when I convert it in the pipeline, and when I initialize the texture @ runtime.
  • 4/27/2008 10:30 PM In reply to

    Re: Reading/Writing Dxt textures

    Still encountering this issue, however, I finally got off my lazy bum and cleaned up my code for a new Scurvy Media release.  I included the ability to compress the frame data using dxt1 as a processor Property ("Use Compression").  If you download the latest release (binary and source included in the release) and set the sample project's use compression flag to true, you will see the behavior described above.
  • 4/28/2008 11:29 AM In reply to

    Re: Reading/Writing Dxt textures

    That's bizarre.

    It would be worth printing out the texture resolution and data length counts both inside the pipeline and at runtime, as a sanity check to make sure they are the same. I don't see how that code could be failing unless something is somehow going missing en route.
  • 4/28/2008 11:30 AM In reply to

    Re: Reading/Writing Dxt textures

    Oooh... one more thing to check: Stream.Read is allowed to read less than the requested amount. You should check the return value and loop if it hasn't read as much as you asked for (yeah, a crazy behavior IMHO, but that's the way it is for some reason :-)
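    The looping read described above is the same pattern in any language; here is a Python sketch (the helper name is mine, not from the thread):

    ```python
    import io

    def read_exactly(stream, count):
        """Read exactly `count` bytes from a stream, looping because a
        single read() call may legally return fewer bytes than requested."""
        chunks = []
        remaining = count
        while remaining > 0:
            chunk = stream.read(remaining)
            if not chunk:  # stream ended before we got everything
                raise EOFError("stream ended with %d bytes missing" % remaining)
            chunks.append(chunk)
            remaining -= len(chunk)
        return b"".join(chunks)

    frame = read_exactly(io.BytesIO(b"\x01\x02\x03\x04"), 4)
    print(frame)  # b'\x01\x02\x03\x04'
    ```

    The same check-and-loop applies to the `_stream.Read(Data, 0, frameSize)` calls earlier in the thread: accumulate until `frameSize` bytes have actually arrived.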
  • 4/28/2008 2:36 PM In reply to

    Re: Reading/Writing Dxt textures

    Shawn Hargreaves:
    (yeah, a crazy behavior IMHO, but that's the way it is for some reason :-)

    That "craziness" lets you read a file in fixed-size chunks without first checking that the file's length is a multiple of said chunk size. The shorter-than-requested read usually only happens at the end of the file, though :)

  • 5/19/2008 9:12 AM In reply to

    Re: Reading/Writing Dxt textures

    ok, here's the latest :-)

    I did change the file stream code to ensure that I read all bytes.  No change.  I kind of gave up on trying to get the compression working for a bit, to try to get the uncompressed version to work on Xbox 360 (as the color order on 360 is apparently different).  That's been no end of frustration.

    But last night on the #xna channel, this quote comes up from the SurfaceFormat enumeration documentation:
    DXT1 compression texture format. The runtime will not allow an application to create a surface using a DXTn format unless the surface dimensions are multiples of 4. This applies to offscreen-plain surfaces, render targets, 2D textures, cube textures, and volume textures.

    So I take it to mean that the issues I've been seeing might be because the texture height/width isn't a power of 4?  If so, I can probably just resize the texture in the pipeline and provide some sort of display area rectangle.

    Would this work?  Am I going to have to worry about cross-platform issues on Xbox like I have with the other SurfaceFormats?
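    The padding idea above amounts to rounding each dimension up to the DXT block size; a trivial Python sketch (my own helper name):

    ```python
    def round_up_to_block(dim, block=4):
        """Round a texture dimension up to the next multiple of the DXT
        block size (4), as the DXTn surface restriction requires."""
        return -(-dim // block) * block

    print(round_up_to_block(160))  # 160 (already a multiple of 4)
    print(round_up_to_block(121))  # 124
    ```

    For the 160x120 video in question, both dimensions are already multiples of 4, so no padding would actually be needed.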

    Thank you so much for everyone's help so far :-)
  • 5/22/2009 3:51 AM In reply to

    Re: Reading/Writing Dxt textures

    Hi Joel!

    I don't know if you got around this, but here is the way I'm using it:

    In the content writer, just use:

    Texture2DContent texture;

    Then, in the content reader, use:

    Texture2D texture;
    texture = input.ReadObject<Texture2D>();

    The XNA engine will convert it automagically.
    You don't have to assign a graphics device to the Texture2D either, because that is done automagically too.
  • 5/22/2009 4:53 AM In reply to

    Re: Reading/Writing Dxt textures

    It's a multiple of 4, not a power of 4. 160 and 120 are both multiples of 4.

    The DXT1 format is a fairly simple paletted format. The first two half-words are "color A" and "color B" in 5:6:5 format. The next 4 bytes contain one scanline of pixel data each, two bits per pixel.

    If "colorA" is numerically larger than "colorB" then the palette entries are A, B, (A*2+B)/3 and (A+B*2)/3. If "colorA" is numerically less than or equal to "colorB" then the palette entries are A, B, (A+B)/2 and transparent black (0).

    The main difference on the Xbox would probably be if the color A and color B half-words are stored big-endian instead of little-endian, although it just might be the case that it would flip entire words (32 bits) instead of the first two half-words only -- experimentation would tell you for sure.

    For example, the following eight bytes will define a 4x4 texture that fades from black on top to white on bottom:


    And, because the PC is little-endian, the data is actually stored fully word-flipped, so if you dump a DDS with black on top and white at bottom, you actually get the following data:

    ff ff 00 00 55 ff aa 00
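    As a concrete check on that dumped block, here is a small Python sketch (my own helper, not from the thread) that decodes one DXT1 block exactly as described above:

    ```python
    import struct

    def decode_dxt1_block(block):
        """Decode one 8-byte DXT1 block into a 4x4 grid of (r, g, b) texels.

        Layout: two little-endian 5:6:5 colors, then four bytes of 2-bit
        palette indices, one scanline per byte (lowest bits first).
        """
        c0, c1 = struct.unpack_from("<HH", block, 0)

        def unpack565(c):
            r = (c >> 11) & 0x1F
            g = (c >> 5) & 0x3F
            b = c & 0x1F
            # expand each channel to 8 bits
            return (r * 255 // 31, g * 255 // 63, b * 255 // 31)

        a, b = unpack565(c0), unpack565(c1)
        if c0 > c1:  # four-color mode
            palette = [a, b,
                       tuple((2 * x + y) // 3 for x, y in zip(a, b)),
                       tuple((x + 2 * y) // 3 for x, y in zip(a, b))]
        else:        # three colors plus transparent black
            palette = [a, b,
                       tuple((x + y) // 2 for x, y in zip(a, b)),
                       (0, 0, 0)]

        return [[palette[(row_byte >> (2 * i)) & 0x3] for i in range(4)]
                for row_byte in block[4:8]]

    # The dumped block: white/black palette, rows fading black -> white
    rows = decode_dxt1_block(bytes.fromhex("ffff000055ffaa00"))
    print(rows[0][0], rows[3][0])  # (0, 0, 0) (255, 255, 255)
    ```

    Running this on the dump reproduces the gradient: row indices 01/11/10/00 select black, dark gray, light gray, then white.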

    For the DXT3 and DXT5 formats, the color part is the same, but each block is 16 bytes: 8 bytes of alpha info followed by 8 bytes of color info. For DXT3, each alpha pixel is simply 4 bits of alpha, which get expanded to 8. For DXT5, it's a palette, where the first two bytes are value A and value B, and where the six bytes after that contain three bits per pixel that interpolate between those two values. As a final gnarl, if value A is less than or equal to value B, then two of the code values mean "fully transparent" and "fully opaque", whereas the rest of the values interpolate between A and B.

    The full description of this encoding is actually found on MSDN, if you look for "DXT" and "DDS" documentation.
