Can anyone point me toward a good, easy to understand bilinear resample routine? I essentially have an array of bits, say a texture image at 256,256. I want to be able to resize it to anything, with a nice filter to it.
This is for a software-only mode so I can't use the DX routines.
hmm, no tutorial link here, but bilinear won't help you very much with scales <0.5 or >2.0 (sure, it will look way better than without it, but in many cases it will still look bad at a scale like 4.1 or 0.2)
What bilinear does is this:
if you have got 10 points (0..9) and you want to rescale them to 11 points (0..10), you take the values at source positions 0, 0.9, 1.8, 2.7, 3.6, 4.5, 5.4, 6.3, 7.2, 8.1, 9.0 and draw them as points 0,1,2,3,4,5,6,7,8,9,10 on the new texture.
So what bilinear does (in 1D) is:
for position 0.0: out = pixel0*1.0f + pixel1*0.0f
for position 0.9: out = pixel0*0.1f + pixel1*0.9f
i.e. the two neighbouring pixels are weighted by how close the sample position is to each of them.
The same goes for 2D (but you obviously take 4 pixels).
So in short, it gets the pixel color by linearly interpolating between the 4 surrounding pixels.
So your resize in 1D would look like this.
We have: widths (originalwidth, newwidth) and 1D data (from, to):

for (int newx=0; newx<newwidth; newx++)
{
    float originalx = newx * (originalwidth-1) / (float)(newwidth-1);
    int   ix  = (int)originalx;
    float pos = originalx - ix;  // so if originalx=2.3, pos=0.3
    to[newx]  = from[ix]*(1-pos) + from[ix+1]*pos;  // watch ix+1 at the right edge
}
That's about how it works in 1D (you should obviously use fixed-point math if you care about speed)
Fixed point isn't any faster on a PC, is it? I thought those days were long gone?
BTW, here is some source I have from flipcode:
Here's an image shrinking algorithm I wrote in Delphi for resizing a bitmap using pixel averaging. Unlike bilinear or bicubic, it uses all available information from the source image to create a high quality result. For example, if you resize by a factor of 4, then each pixel in the result image will be calculated from a 4 by 4 box of pixels in the original image. It's not as fast as other resizing algorithms, but I sped it up quite a bit by storing scanline pointers in an array.
I wrote it while making this image resizing software
type
  TRGBTripleArray = array[0..32767] of TRGBTriple;
  PRGBTripleArray = ^TRGBTripleArray;

// shrink a 24-bit (pf24bit) bitmap by a given ratio
// (ARatio = source size / destination size, so ARatio >= 1 when shrinking)
procedure Shrink(ARatio : Real; ABitmap, ABitmapOut : TBitmap);
var
  Lx, Ly : integer;
  LyBox, LxBox, LyBox1, LyBox2, LxBox1, LxBox2 : integer;
  TR, TG, TB : integer;                  // running sums over the box
  avR, avG, avB : integer;               // box averages
  LRowIn, LRowOut : PRGBTripleArray;
  LBoxCount : integer;                   // pixels summed for this box
  LRowBytes : integer;                   // byte stride between output rows
  LBoxRows : array of PRGBTripleArray;   // cached source scanline pointers
begin
  LRowOut := ABitmapOut.ScanLine[0];
  LRowBytes := Integer(ABitmapOut.ScanLine[1]) - Integer(LRowOut);
  SetLength(LBoxRows, trunc(ARatio) + 1);
  for Ly := 0 to ABitmapOut.Height-1 do begin
    LyBox1 := trunc(Ly*ARatio);
    LyBox2 := trunc((Ly+1)*ARatio) - 1;
    // cache scanline pointers for every source row covered by this box
    for LyBox := LyBox1 to LyBox2 do
      LBoxRows[LyBox-LyBox1] := ABitmap.ScanLine[LyBox];
    for Lx := 0 to ABitmapOut.Width-1 do begin
      LxBox1 := trunc(Lx*ARatio);
      LxBox2 := trunc((Lx+1)*ARatio) - 1;
      TR := 0; TG := 0; TB := 0;
      LBoxCount := 0;
      for LyBox := LyBox1 to LyBox2 do begin
        LRowIn := LBoxRows[LyBox-LyBox1];
        for LxBox := LxBox1 to LxBox2 do begin
          TR := TR + LRowIn[LxBox].rgbtRed;
          TG := TG + LRowIn[LxBox].rgbtGreen;
          TB := TB + LRowIn[LxBox].rgbtBlue;
          Inc(LBoxCount);
        end;
      end;
      avR := TR div LBoxCount;
      avG := TG div LBoxCount;
      avB := TB div LBoxCount;
      LRowOut[Lx].rgbtBlue := avB;
      LRowOut[Lx].rgbtGreen := avG;
      LRowOut[Lx].rgbtRed := avR;
    end;
    // advance one output scanline (stride can be negative for bottom-up DIBs)
    LRowOut := PRGBTripleArray(Integer(LRowOut) + LRowBytes);
  end;
end;
If I remember right, what it does is take into account all pixels in the image (and not only 2x2 pixel blocks like in bilinear) while doing a resize... it should give high quality no matter what scale you use (i.e. 0.1 or 0.5)
It might not be faster for the calculations themselves... but when you constantly use the value as an index into your data, you would have to do tons of float->int conversions, which aren't cheap.
Another nice trick with fixed point is that you can use shifts (much faster than muls/divs) and retrieve the fraction part of a value with a simple mask.
I would say you can get the fixed-point implementation working way faster.
The bad thing is that you lose a bit of quality... but very, very little (or none in this example if you use something like 16:16 fixed point)
BTW, if you want to use this code for creating mipmaps (i.e. 64x64 and 128x128 from a 256x256 texture), then use the second method (pixel averaging, not bilinear), and I would guess it would be better to create the 64x64 map from the 128x128 image - faster, with similar quality.
Last edited by MirekCz; 10-29-2004 at 04:05 AM.
Check out the Prophecy SDK, or the more recent 'Fusion' from Twilight3D. They let you do bilinear or bicubic resizing in software, generate mipmap chains, and more - all in neat and elegant source code with a very liberal license.
Game Developer ran an article a few years ago about 'Gaussian resampling' for reducing textures.
Bilinear resampling down involves blurring as well - you lose a lot of detail. The Gaussian method had some tweaks to preserve contrast and features, resulting in a much nicer image.
Not much use without the article or samples I know
Hmm, I have got the GDM 4-CD collection but can't remember seeing it there... does anyone know if it's on there?
Without getting fancy, bilinear filtering is just the weighted blend of the four texels (the 2x2 block) nearest the sample point, with the weights taken from the fractional part of the sample position.
'Zoom' (Paul Heckbert, 1995) implements a variety of separable antialiased filters for resizing and resampling images. Includes source code.
Run-Time MIP-Map Filtering (Andrew Flavell, 1998).
Image Scaling paper (Peter Chien and Emily Kwok, 1999).
Bicubic Interpolation for Image Scaling (Paul Bourke, 2001).
Last edited by Wayward; 10-29-2004 at 07:04 AM.
This may have been 4-5 years ago...
Hm, some of this is helpful, thanks guys. I'll see what I can come up with.