Rusty Image is a simple image resizer, format converter, and analysis app written in Rust, compiled to WASM, hosted on GitHub Pages, and developed with the help of ChatGPT. It even supports offline mode and can be installed as a standalone PWA on your device.
- Feel free to give it a try here: https://rustyimagetools.github.io
- Source: https://github.com/RustyImageTools/RustyImageTools.github.io
In this post, let's take a look at how I built the Rust logic.
Breaking down the lib.rs file
The core of RustyImageTools is the lib.rs file in the repository linked above. It serves as the entry point for the Rust logic: although the app is later compiled to a WASM file, all of the core logic lives here.
Starting with the imports
use exif::{In, Reader, Tag};
use image::{imageops::FilterType, DynamicImage, GenericImageView, ImageFormat, Pixel, Rgb};
use std::{collections::HashMap, fmt::Write, io::Cursor};
use wasm_bindgen::prelude::*;
use serde_wasm_bindgen::to_value;
use serde::Serialize;
EXIF Metadata Handling
use exif::{In, Reader, Tag};
The exif library is used to handle EXIF (Exchangeable Image File Format) metadata, which stores information about an image, such as camera settings, date taken, and orientation.
- In: Specifies where to look for the metadata within the image file.
- Reader: Reads and parses the EXIF metadata from an image.
- Tag: Represents the various tags or fields within the EXIF metadata, like DateTime, Orientation, and more.
Image Processing
use image::{imageops::FilterType, DynamicImage, GenericImageView, ImageFormat, Pixel, Rgb};
- image: A popular Rust library for handling images, which allows for loading, manipulating, and saving images in different formats.
- imageops::FilterType: Defines various image scaling filters, such as Nearest, Lanczos3, and Gaussian, used to control the quality of resizing operations.
- DynamicImage: A versatile enum representing an in-memory image whose pixel format (RGB8, RGBA8, grayscale, etc.) is determined at runtime, allowing operations without knowing the specific format in advance.
- GenericImageView: A trait that provides methods for querying image properties, such as width and height.
- ImageFormat: An enum representing different image file formats (e.g., JPEG, PNG), used when saving or loading images.
- Pixel: A trait for pixel manipulation, enabling low-level control over pixel data.
- Rgb: Represents RGB pixel data, useful for accessing and modifying color data within images.
Standard Library Imports
use std::{collections::HashMap, fmt::Write, io::Cursor};
- collections::HashMap: A key-value storage type that allows efficient retrieval, commonly used where tags or parameters are associated with specific values.
- fmt::Write: Allows for formatted output, commonly used when generating strings dynamically (e.g., building metadata outputs or formatted logs).
- io::Cursor: Allows in-memory buffers to be used as if they were file-like objects, helpful for working with data streams, such as when decoding or encoding images in memory without actual files.
WebAssembly and Serialization
use wasm_bindgen::prelude::*;
use serde_wasm_bindgen::to_value;
use serde::Serialize;
- wasm_bindgen: A library for integrating Rust with WebAssembly, making Rust functions accessible from JavaScript when compiling for the web.
- serde_wasm_bindgen::to_value: Converts Rust data structures into JavaScript-compatible values, essential for passing complex data to JavaScript from WebAssembly.
- serde::Serialize: A trait that makes Rust structs serializable, allowing them to be converted into formats like JSON, often needed when passing structured data between Rust and JavaScript.
Defining a struct for Image Metadata and Color Analysis
When analyzing images, we want to capture metadata details and color patterns. I define a struct called ImageAnalysis to store these values:
#[derive(Serialize)]
struct ImageAnalysis {
exif_data: Vec<[String; 2]>,
unique_colors: Vec<String>,
}
By adding the #[derive(Serialize)] attribute, we're enabling the struct to be converted (serialized) into formats like JSON. This is especially useful for web applications or any system where the data might be passed to JavaScript or some other API.
Next, we capture the image metadata in the exif_data: Vec<[String; 2]> field. This vector holds key-value pairs, each represented as a two-element array of strings: an EXIF tag name and its value.
The unique_colors: Vec<String> field is another vector, this time holding strings that represent color values. Each color string is in a format like "#RRGGBB", representing the RGB colors uniquely present in the image.
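To make the shape concrete, here is a minimal sketch (the values are made up purely for illustration) of an ImageAnalysis instance and roughly what it looks like once serialized for JavaScript:
let example: ImageAnalysis = ImageAnalysis {
    exif_data: vec![["Orientation".to_string(), "1".to_string()]],
    unique_colors: vec!["#FF5733".to_string(), "#33A1FF".to_string()],
};
// After serde_wasm_bindgen::to_value(&example), JavaScript receives roughly:
// { exif_data: [["Orientation", "1"]], unique_colors: ["#FF5733", "#33A1FF"] }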
Reading Image Orientation
/// Reads the orientation from the image EXIF data.
/// Returns the orientation as a `u16`. Defaults to `1` if not found.
fn read_orientation(image_data: &[u8]) -> u16 {
// Wrap the byte slice in a Cursor, which implements BufRead and Seek
let cursor: Cursor<&[u8]> = Cursor::new(image_data);
// Create a new Reader without needing to pass any arguments
let reader: Reader = Reader::new();
// Attempt to read the EXIF data from the image
match reader.read_from_container(&mut cursor.clone()) {
Ok(exif) => {
// Attempt to find the orientation tag in the primary IFD
if let Some(field) = exif.get_field(Tag::Orientation, In::PRIMARY) {
// If found, return its value as a `u16`
match field.value.get_uint(0) {
Some(val) => val as u16,
None => 1, // Default orientation if the tag value is not readable
}
} else {
// Default orientation if the tag is not found
1
}
}
Err(_) => {
// Default orientation if EXIF data cannot be read
1
}
}
}
The read_orientation function extracts the orientation value from an image's EXIF metadata. This metadata tells us whether an image is rotated or flipped, so we can re-apply the correct orientation after resizing.
This function receives image_data as a byte slice &[u8], which represents the raw binary data of the image. It returns a u16 representing the orientation value found in the EXIF data. The default orientation (1) indicates no rotation or flipping, assuming that the image is already correctly oriented if no EXIF data exists.
// Wrap the byte slice in a Cursor, which implements BufRead and Seek
let cursor: Cursor<&[u8]> = Cursor::new(image_data);
The function starts by wrapping image_data in a Cursor. This allows the byte slice to behave like a file, implementing both the BufRead and Seek traits. Using Cursor enables the Reader to read through the image data as though it were reading from a file, which is necessary for working with EXIF data stored in binary form.
// Create a new Reader without needing to pass any arguments
let reader: Reader = Reader::new();
Next, the function initializes a new Reader, which will handle the EXIF data parsing. This reader will be responsible for locating and extracting metadata fields from the image, including orientation.
// Attempt to read the EXIF data from the image
match reader.read_from_container(&mut cursor.clone()) {
This line attempts to read EXIF data by calling read_from_container. This method inspects the image data within the cursor, looking for EXIF headers and metadata tags. Using match here allows the function to handle any potential errors, such as missing or malformed EXIF data, gracefully.
// Attempt to find the orientation tag in the primary IFD
if let Some(field) = exif.get_field(Tag::Orientation, In::PRIMARY) {
// If found, return its value as a `u16`
match field.value.get_uint(0) {
Some(val) => val as u16,
None => 1, // Default orientation if the tag value is not readable
}
} else {
// Default orientation if the tag is not found
1
}
Within the successful result of read_from_container, the function checks for the Orientation tag in the primary Image File Directory (IFD). If this tag exists, the function retrieves its value as an unsigned integer u16. The use of get_uint(0) means that it attempts to read the first (and only) value of the orientation tag. If either the tag or its value isn't available, the function defaults to returning 1, which means "no rotation".
Err(_) => {
// Default orientation if EXIF data cannot be read
1
}
If there is an error reading the EXIF data, we simply default to 1 again.
Applying The Orientation
/// Applies the appropriate transformation to the image based on its EXIF orientation.
fn apply_orientation(mut img: DynamicImage, orientation: u16) -> DynamicImage {
match orientation {
1 => img, // Normal, no action needed
2 => img.fliph(), // Flipped horizontally
3 => img.rotate180(), // Rotated 180 degrees
4 => img.flipv(), // Flipped vertically
5 => {
// Transposed: flipped horizontally then rotated 90 degrees CCW
img = img.fliph();
img.rotate270()
}
6 => img.rotate90(), // Rotated 90 degrees CW
7 => {
// Transverse: flipped horizontally then rotated 90 degrees CW
img = img.fliph();
img.rotate90()
}
8 => img.rotate270(), // Rotated 90 degrees CCW
_ => img, // Default case, no transformation
}
}
In this function, we take an image img and apply a transformation based on the orientation value obtained from the previous function.
- img: DynamicImage. The img argument represents an image in memory, of type DynamicImage. This type, from the image crate, is a flexible image container that can handle various pixel formats like RGB, grayscale, and more. By passing img as mut, the function can modify it in place, transforming it based on the orientation metadata.
- orientation: u16. The orientation argument is a 16-bit unsigned integer representing the orientation value embedded in the image metadata, ranging from 1 to 8. Each number defines a specific transformation (e.g., flipping or rotating the image).
We then match on this integer, apply the corresponding flip and/or rotation to the provided img, and return the result.
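Put together, the two functions are used like this (the same pairing appears later in resize_image; image_data here is the raw file bytes, and the variable names are just for illustration):
let img: DynamicImage = image::load_from_memory(image_data).expect("Failed to load image");
let orientation: u16 = read_orientation(image_data);
let upright: DynamicImage = apply_orientation(img, orientation);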
Converting RGB to HSB
// A simple function to convert RGB to HSB
fn rgb_to_hsb(rgb: Rgb<u8>) -> (f32, f32, f32) {
let r = rgb[0] as f32 / 255.0;
let g = rgb[1] as f32 / 255.0;
let b = rgb[2] as f32 / 255.0;
let max = r.max(g.max(b));
let min = r.min(g.min(b));
let delta = max - min;
let hue = if delta == 0.0 {
0.0
} else if max == r {
60.0 * (((g - b) / delta) % 6.0)
} else if max == g {
60.0 * (((b - r) / delta) + 2.0)
} else {
60.0 * (((r - g) / delta) + 4.0)
};
let saturation = if max == 0.0 { 0.0 } else { delta / max };
(hue, saturation, max)
}
The rgb_to_hsb function converts a color represented in RGB (Red, Green, Blue) format to HSB (Hue, Saturation, Brightness). I do this because the later functions that find unique colors are easier to write in HSB than in raw RGB, so I convert each color first.
This function takes a single argument, rgb, of type Rgb<u8>. This type represents an RGB color where each channel (Red, Green, and Blue) is an unsigned 8-bit integer (ranging from 0 to 255). The function returns a tuple (f32, f32, f32) representing the Hue, Saturation, and Brightness channels as floating-point values.
The first step in converting RGB to HSB is to normalize each RGB channel to a range of 0.0 to 1.0. This makes the calculations easier, as the formulae for HSB use normalized values:
let r = rgb[0] as f32 / 255.0;
let g = rgb[1] as f32 / 255.0;
let b = rgb[2] as f32 / 255.0;
In the HSB model, brightness is simply the maximum of the three normalized RGB channels. The max function here determines which of r, g, or b has the highest value:
let max = r.max(g.max(b));
Similarly, to calculate the “delta,” or difference between the maximum and minimum channel values, we use min. This delta will be essential for determining hue and saturation:
let min = r.min(g.min(b));
let delta = max - min;
The hue calculation relies on understanding which of the RGB channels is the largest (max), which determines the “base” color. Each color’s hue range falls within one of the primary colors’ segments:
- Red max: Hue is calculated with a formula that factors in the green and blue differences relative to red.
- Green max: Hue is shifted by +2.0 to adjust to green’s segment.
- Blue max: Hue is further shifted by +4.0 to represent blue’s position.
These formula segments create a full hue range from 0 to 360 degrees:
let hue = if delta == 0.0 {
0.0 // Gray or black, so hue is undefined
} else if max == r {
60.0 * (((g - b) / delta) % 6.0)
} else if max == g {
60.0 * (((b - r) / delta) + 2.0)
} else {
60.0 * (((r - g) / delta) + 4.0)
};
Saturation measures color intensity, calculated as the ratio of delta to max. When max is 0.0 (black), saturation is 0.0. Otherwise, it is determined as delta / max:
let saturation = if max == 0.0 { 0.0 } else { delta / max };
Finally, the function returns (hue, saturation, max), where max serves as brightness. Each value in the tuple represents one of the HSB channels:
(hue, saturation, max)
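As a quick sanity check of the math above, here are the values the function produces for a few pure colors (these particular inputs happen to yield exact floating-point results):
assert_eq!(rgb_to_hsb(Rgb([255, 0, 0])), (0.0, 1.0, 1.0));   // red: hue 0, fully saturated, full brightness
assert_eq!(rgb_to_hsb(Rgb([0, 255, 0])), (120.0, 1.0, 1.0)); // green: hue 120
assert_eq!(rgb_to_hsb(Rgb([0, 0, 255])), (240.0, 1.0, 1.0)); // blue: hue 240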
Comparing HSB colors
// Function to calculate difference in hue, saturation and brightness
fn hsb_diff(hsb1: (f32, f32, f32), hsb2: (f32, f32, f32)) -> (f32, f32, f32) {
let hue_diff = (hsb1.0 - hsb2.0).abs();
let saturation_diff = (hsb1.1 - hsb2.1).abs();
let brightness_diff = (hsb1.2 - hsb2.2).abs();
(saturation_diff, brightness_diff, hue_diff)
}
The hsb_diff function calculates the difference between two colors in the HSB color model. By finding the absolute difference in each of the HSB channels, it provides a way to measure how "different" two colors are in terms of hue, saturation, and brightness. This can be especially useful for tasks such as comparing colors in an image or determining color similarity for filtering or clustering applications.
The function takes two arguments, hsb1 and hsb2, which are tuples representing colors in the HSB format. Each tuple contains three f32 floating-point values corresponding to the hue, saturation, and brightness channels of a color.
The first calculation focuses on the hue channels. Hue represents the color's "angle" on the color wheel (often in degrees, from 0 to 360). To find the difference, we subtract the hue of hsb2 from the hue of hsb1 and take the absolute value, ensuring that the result is non-negative. This provides a straightforward measure of how similar or different the colors' hues are:
let hue_diff = (hsb1.0 - hsb2.0).abs();
Saturation measures the color's intensity or vibrancy, ranging from 0 (gray) to 1 (full saturation). Similar to the hue difference, the saturation difference is computed by subtracting hsb2's saturation from hsb1's saturation and taking the absolute value. This tells us how different the two colors are in terms of vibrancy:
let saturation_diff = (hsb1.1 - hsb2.1).abs();
Brightness measures the lightness of a color, from 0 (black) to 1 (full brightness). The brightness difference is computed similarly to hue and saturation, using the absolute difference between hsb1 and hsb2. This difference provides an idea of how much lighter or darker one color is relative to the other:
let brightness_diff = (hsb1.2 - hsb2.2).abs();
Finally, the function returns a tuple containing the saturation, brightness, and hue differences, in that order. By ordering the channels as (saturation_diff, brightness_diff, hue_diff), the result aligns with how the differences are prioritized (e.g., intensity and brightness can be more visually impactful than hue alone):
(saturation_diff, brightness_diff, hue_diff)
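For example, comparing full red with a darker red differs only in brightness (values are approximate):
let bright_red = rgb_to_hsb(Rgb([255, 0, 0])); // (0.0, 1.0, 1.0)
let dark_red = rgb_to_hsb(Rgb([128, 0, 0]));   // roughly (0.0, 1.0, 0.5)
let (sat_diff, bri_diff, hue_diff) = hsb_diff(bright_red, dark_red);
// sat_diff is about 0.0, bri_diff about 0.5, hue_diff about 0.0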
Looking For Unique Colors
fn get_unique_colors(image_data: &[u8]) -> Vec<String> {
let img: DynamicImage = image::load_from_memory(image_data).expect("Failed to load image");
let mut color_count: HashMap<[u8; 3], u32> = HashMap::new();
for (_, _, pixel) in img.pixels() {
let rgb = pixel.to_rgb().0;
*color_count.entry(rgb).or_insert(0) += 1;
}
let all_colors = color_count.keys().collect::<Vec<_>>();
let mut unique_colors = Vec::new();
for &color in all_colors {
let color_hsb: (f32, f32, f32) = rgb_to_hsb(Rgb(color)); // Corrected this line
if unique_colors.iter().all(|&unique| {
let (sat_diff, bri_diff, hue_diff) = hsb_diff(color_hsb, rgb_to_hsb(Rgb(unique)));
sat_diff > 0.1 && bri_diff > 0.1 && hue_diff > 10.0 // Adjust thresholds as needed
}) {
unique_colors.push(color);
if unique_colors.len() >= 20 {
break;
} // Limit to 20 unique colors
}
}
let mut results = Vec::new();
// Convert channel data to hex
for color in unique_colors {
let mut hex_color = String::new();
write!(
&mut hex_color,
"#{:02X}{:02X}{:02X}",
color[0], color[1], color[2]
)
.unwrap();
results.push(hex_color);
}
results
}
The get_unique_colors function takes in raw image data and returns a vector of unique colors found in the image. These colors are filtered for uniqueness based on their HSB values and then converted into hexadecimal strings for easy use.
fn get_unique_colors(image_data: &[u8]) -> Vec<String>
We start by taking a reference to a slice of bytes, image_data: &[u8], representing the raw image data. The function returns a vector of strings, each representing a unique color in hex format (e.g., "#FF5733").
The function begins by loading the image from image_data, using the image::load_from_memory function. This attempts to interpret the raw byte data and convert it into a DynamicImage:
let img: DynamicImage = image::load_from_memory(image_data).expect("Failed to load image");
Next, a HashMap is created to count occurrences of each unique color in RGB format [u8; 3], where each key represents an RGB color and the value tracks how many times it appears:
let mut color_count: HashMap<[u8; 3], u32> = HashMap::new();
The function then iterates over each pixel in the image, using the img.pixels() iterator:
for (_, _, pixel) in img.pixels() {
let rgb = pixel.to_rgb().0;
*color_count.entry(rgb).or_insert(0) += 1;
}
Each pixel's RGB value is extracted and counted. entry(rgb).or_insert(0) ensures that each color starts with a count of zero if it isn't already in the map; the counter is then incremented.
After counting colors, all_colors collects the unique colors (keys) from color_count into a vector:
let all_colors = color_count.keys().collect::<Vec<_>>();
The unique_colors vector will store colors filtered for uniqueness based on their HSB values:
let mut unique_colors = Vec::new();
The loop iterates over each color, converting it to HSB with rgb_to_hsb as seen above, and checks whether it is sufficiently different from the colors already in unique_colors. The differences are measured by hsb_diff, which, as shown above, calculates the difference in hue, saturation, and brightness:
for &color in all_colors {
let color_hsb: (f32, f32, f32) = rgb_to_hsb(Rgb(color));
if unique_colors.iter().all(|&unique| {
let (sat_diff, bri_diff, hue_diff) = hsb_diff(color_hsb, rgb_to_hsb(Rgb(unique)));
sat_diff > 0.1 && bri_diff > 0.1 && hue_diff > 10.0 // Adjust thresholds as needed
}) {
unique_colors.push(color);
if unique_colors.len() >= 20 {
break;
}
}
}
Here, the hsb_diff function ensures that only colors with significant differences (as defined by the thresholds) are added to unique_colors. The break limits this vector to 20 colors to avoid overly large results.
The final step is to convert each RGB color in unique_colors into a hexadecimal string. Using the write! macro, each color's RGB channels are formatted as a hex string and pushed to results:
for color in unique_colors {
let mut hex_color = String::new();
write!(
&mut hex_color,
"#{:02X}{:02X}{:02X}",
color[0], color[1], color[2]
)
.unwrap();
results.push(hex_color);
}
Each hex color is formatted as #RRGGBB, and any write! errors are handled with unwrap() since they are unlikely in this context.
The function returns results, containing a vector of unique colors in hex format:
results
With that, we end up with a vector of up to 20 unique colors from the provided image.
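To see it in action outside the browser, here is a minimal sketch (not part of the app, and the helper name is made up) that builds a tiny two-color image in memory, encodes it to PNG bytes, and feeds those bytes to get_unique_colors:
fn demo_unique_colors() {
    // Build a 2x2 image: the top row is red, the bottom row is a muted teal.
    let mut img = image::RgbImage::new(2, 2);
    img.put_pixel(0, 0, image::Rgb([255, 0, 0]));
    img.put_pixel(1, 0, image::Rgb([255, 0, 0]));
    img.put_pixel(0, 1, image::Rgb([64, 128, 128]));
    img.put_pixel(1, 1, image::Rgb([64, 128, 128]));
    // Encode it to PNG bytes, the same kind of input the WASM functions receive.
    let mut png_bytes: Vec<u8> = Vec::new();
    image::DynamicImage::ImageRgb8(img)
        .write_to(&mut std::io::Cursor::new(&mut png_bytes), image::ImageFormat::Png)
        .unwrap();
    // The two colors differ in hue, saturation, and brightness, so both pass the uniqueness thresholds.
    let colors = get_unique_colors(&png_bytes);
    println!("{:?}", colors); // e.g. ["#FF0000", "#408080"] (order may vary)
}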
Parsing Metadata from EXIF tags
fn parse_exif_data(image_data: &[u8]) -> Vec<[String; 2]> {
// Initialize an empty vector to hold our EXIF tags as strings
let mut exif_tags: Vec<[String; 2]> = Vec::new();
// Create a cursor around the image data
let cursor: Cursor<&[u8]> = Cursor::new(image_data);
// Attempt to read the EXIF data using the exif crate
match Reader::new().read_from_container(&mut cursor.clone()) {
Ok(exif) => {
for field in exif.fields() {
// Create an array for each EXIF field with the tag name and its display value
let tag_name: String = field.tag.to_string();
let tag_value: String = field.display_value().with_unit(&exif).to_string();
let tag_pair: [String; 2] = [tag_name, tag_value];
// Push the array into our vector
exif_tags.push(tag_pair);
}
},
Err(e) => {
// If there's an error reading the EXIF data, push the error message to the tags vector
exif_tags.push(["Failed to read EXIF data".to_string(), e.to_string()]);
}
}
// Return the vector of tag pairs; conversion to a JsValue happens later in analyze_image
exif_tags
}
The parse_exif_data function reads and extracts EXIF metadata from an image, returning it as a vector of string pairs. Each pair consists of an EXIF tag name and its value, allowing users to retrieve detailed metadata from an image file, such as camera settings, date taken, and location (if available).
fn parse_exif_data(image_data: &[u8]) -> Vec<[String; 2]>
The function takes a reference to a byte slice &[u8] representing the raw image data, and returns a vector of string arrays Vec<[String; 2]>, where each array represents an EXIF tag in the format [tag_name, tag_value].
let mut exif_tags: Vec<[String; 2]> = Vec::new();
The function begins by creating an empty vector, exif_tags, which will store the extracted EXIF metadata. Each item in this vector is a two-element array of String values, where:
- The first element is the tag's name.
- The second element is the tag's display value.
Next, a Cursor is created around image_data. A Cursor allows byte-based access to data, which is necessary for working with EXIF metadata stored in a binary format:
let cursor: Cursor<&[u8]> = Cursor::new(image_data);
Next, it reads the EXIF metadata using the Reader from the exif crate. The read_from_container method takes a mutable reference to the cursor and attempts to parse the EXIF fields:
match Reader::new().read_from_container(&mut cursor.clone()) {
Ok(exif) => { /*...*/ },
Err(e) => { /*...*/ }
}
Here, match is used to handle both successful and unsuccessful outcomes:
- Success (Ok): If EXIF data is successfully read, it processes each metadata field.
- Error (Err): If there's an error (e.g., the image has no EXIF data or it's corrupted), it records an error message.
In the case of successful EXIF data reading, the function iterates over exif.fields(), which contains each EXIF tag in the metadata. For each field:
- Tag Name: The tag name is converted to a string with to_string().
- Tag Value: The display value of the tag, converted to a string, provides a readable version of the metadata (e.g., ISO values, shutter speeds). The display_value().with_unit(&exif) method ensures that units (if applicable) are included:
let tag_name: String = field.tag.to_string();
let tag_value: String = field.display_value().with_unit(&exif).to_string();
let tag_pair: [String; 2] = [tag_name, tag_value];
This pair of strings is then added to exif_tags:
exif_tags.push(tag_pair);
If there is an error reading the EXIF data (e.g., the data is missing or unreadable), the error is captured and a descriptive message is stored in exif_tags. The error is converted to a string so users can see why the EXIF data couldn't be read:
Err(e) => {
exif_tags.push(["Failed to read EXIF data".to_string(), e.to_string()]);
}
Finally, exif_tags, containing either the EXIF tag pairs or an error message, is returned:
exif_tags
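As a small usage sketch (assuming image_data holds the bytes of a photo that actually contains EXIF data), the pairs can be printed like this:
for [name, value] in parse_exif_data(image_data) {
    // The exact tags and display formats depend on the file and the exif crate,
    // e.g. "Orientation", "DateTime", "FNumber", and so on.
    println!("{name}: {value}");
}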
Creating A WASM function to analyze the image data
#[wasm_bindgen]
pub fn analyze_image(image_data: &[u8]) -> JsValue {
let exif_data: Vec<[String; 2]> = parse_exif_data(image_data);
let unique_colors: Vec<String> = get_unique_colors(image_data);
let analysis: ImageAnalysis = ImageAnalysis {
exif_data,
unique_colors,
};
// Convert the combined data into a JsValue
to_value(&analysis).unwrap_or(JsValue::UNDEFINED)
}
The analyze_image function analyzes an image by extracting both its EXIF metadata and unique colors, then combining this data into a structured result that can be returned to JavaScript through WebAssembly. This function leverages Rust's wasm_bindgen crate to facilitate interoperability between Rust and JavaScript, making it possible to process images directly in the browser.
#[wasm_bindgen]
pub fn analyze_image(image_data: &[u8]) -> JsValue
- The function uses the #[wasm_bindgen] attribute, exposing it to JavaScript when compiled to WebAssembly.
- It accepts a reference to a byte slice &[u8] representing the raw image data.
- It returns a JsValue, a JavaScript-compatible type used by WebAssembly to pass data back to JavaScript.
The function begins by calling parse_exif_data(image_data), as seen above.
let exif_data: Vec<[String; 2]> = parse_exif_data(image_data);
Next, the function calls get_unique_colors(image_data) to analyze the image's colors, as covered above.
let unique_colors: Vec<String> = get_unique_colors(image_data);
After gathering the EXIF metadata and unique colors, the function combines them into an ImageAnalysis struct.
let analysis: ImageAnalysis = ImageAnalysis {
exif_data,
unique_colors,
};
The analysis struct is then converted into a JsValue using to_value, which serializes the Rust struct into a JavaScript-compatible value. If there's an issue during serialization, unwrap_or(JsValue::UNDEFINED) ensures that JsValue::UNDEFINED is returned instead of an error:
to_value(&analysis).unwrap_or(JsValue::UNDEFINED)
Creating a function to resize a given image
#[wasm_bindgen]
pub fn resize_image(
image_data: &[u8],
width: u32,
height: u32,
format: &str,
filter: &str,
) -> Vec<u8> {
let img: DynamicImage = image::load_from_memory(image_data).unwrap();
let orientation: u16 = read_orientation(image_data);
let mut img: DynamicImage = apply_orientation(img, orientation);
// Ensure the image is in a color space compatible with the target format.
if format == "jpeg" {
img = DynamicImage::ImageRgb8(img.to_rgb8());
}
let filter_type: FilterType = match filter {
"catmull_rom" => FilterType::CatmullRom,
"gaussian" => FilterType::Gaussian,
"lanczos3" => FilterType::Lanczos3,
"nearest" => FilterType::Nearest,
"triangle" => FilterType::Triangle,
_ => FilterType::Triangle, // Default filter
};
let resized: DynamicImage = img.resize_to_fill(width, height, filter_type);
let image_format: ImageFormat = match format {
"png" => ImageFormat::Png,
"webp" => ImageFormat::WebP,
"jpeg" => ImageFormat::Jpeg,
"avif" => ImageFormat::Avif,
"bmp" => ImageFormat::Bmp,
"gif" => ImageFormat::Gif,
"tiff" => ImageFormat::Tiff,
"ico" => ImageFormat::Ico,
_ => ImageFormat::Png, // Default format
};
let mut result: Vec<u8> = Vec::new();
{
let mut cursor: Cursor<&mut Vec<u8>> = Cursor::new(&mut result);
resized.write_to(&mut cursor, image_format).unwrap();
}
result
}
The resize_image function resizes an image to the specified dimensions and encodes it in the requested format for output in the browser via WebAssembly. This function allows users to set dimensions, specify an output format, and choose a resizing filter.
#[wasm_bindgen]
pub fn resize_image(
image_data: &[u8],
width: u32,
height: u32,
format: &str,
filter: &str,
) -> Vec<u8>
As seen above, we use the #[wasm_bindgen] attribute, exposing the function to JavaScript in WebAssembly. The function accepts raw image data as a byte slice &[u8], target width and height as u32, output format as a &str, and filter type as a &str. It returns a Vec<u8> containing the resized image data in the specified format.
The function first loads the image from image_data using image::load_from_memory, which tries to interpret the byte data as an image. This results in a DynamicImage object:
let img: DynamicImage = image::load_from_memory(image_data).unwrap();
To ensure the image displays correctly, resize_image reads the orientation metadata and applies any necessary rotation or flip using apply_orientation, the functions we saw at the beginning of this post.
let orientation: u16 = read_orientation(image_data);
let mut img: DynamicImage = apply_orientation(img, orientation);
If the output format is JPEG, the image is converted to the RGB8 color space to ensure compatibility with JPEG's color requirements. This is essential because some formats, like PNG, can support additional channels (such as alpha), which JPEG does not:
if format == "jpeg" {
img = DynamicImage::ImageRgb8(img.to_rgb8());
}
The function matches the filter argument to a FilterType variant, determining the resizing algorithm:
- CatmullRom: Produces smooth, high-quality images.
- Gaussian: Useful for images requiring slight blurring.
- Lanczos3: High-quality filter, especially effective for downscaling.
- Nearest: Quick but lower quality, using nearest-neighbor interpolation.
- Triangle: Simple linear interpolation, balancing quality and speed.
If an unsupported filter is specified, FilterType::Triangle is used as the default:
let filter_type: FilterType = match filter {
"catmull_rom" => FilterType::CatmullRom,
"gaussian" => FilterType::Gaussian,
"lanczos3" => FilterType::Lanczos3,
"nearest" => FilterType::Nearest,
"triangle" => FilterType::Triangle,
_ => FilterType::Triangle,
};
Using the resize_to_fill method, the function resizes img to the target width and height, applying the chosen filter type. resize_to_fill scales and crops the image to fit the specified dimensions, filling the entire area without distortion:
let resized: DynamicImage = img.resize_to_fill(width, height, filter_type);
The output format is determined by matching format to an ImageFormat variant. This supports various common formats (e.g., PNG, JPEG, GIF). If the format is not recognized, PNG is used as the default:
let image_format: ImageFormat = match format {
"png" => ImageFormat::Png,
"webp" => ImageFormat::WebP,
"jpeg" => ImageFormat::Jpeg,
"avif" => ImageFormat::Avif,
"bmp" => ImageFormat::Bmp,
"gif" => ImageFormat::Gif,
"tiff" => ImageFormat::Tiff,
"ico" => ImageFormat::Ico,
_ => ImageFormat::Png,
};
The resized image is written to a Vec<u8> (the result variable) using a Cursor for in-memory writing. The chosen image_format determines how the image data is encoded, and the unwrap handles any potential errors during writing:
let mut result: Vec<u8> = Vec::new();
{
let mut cursor: Cursor<&mut Vec<u8>> = Cursor::new(&mut result);
resized.write_to(&mut cursor, image_format).unwrap();
}
Finally, the result vector, containing the resized image data in the requested format, is returned, ready to be consumed by JavaScript.
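To round things off, here is a minimal sketch (again, not part of the app; the helper name is made up, and it assumes the crate is also compiled natively, e.g. in a test) that generates a tiny image in memory, runs it through resize_image, and checks the output dimensions:
fn demo_resize() {
    // Build a 4x4 solid-red image and encode it to PNG bytes.
    let img = image::RgbImage::from_pixel(4, 4, image::Rgb([255, 0, 0]));
    let mut png_bytes: Vec<u8> = Vec::new();
    image::DynamicImage::ImageRgb8(img)
        .write_to(&mut std::io::Cursor::new(&mut png_bytes), image::ImageFormat::Png)
        .unwrap();
    // Resize to 2x2 and re-encode as JPEG using the nearest-neighbor filter.
    let jpeg_bytes = resize_image(&png_bytes, 2, 2, "jpeg", "nearest");
    // Decode the result to confirm the new dimensions.
    let resized = image::load_from_memory(&jpeg_bytes).unwrap().to_rgb8();
    assert_eq!(resized.dimensions(), (2, 2));
}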
Final thoughts.
This post is already long as it is, so I plan on writing a separate post covering how I wrote the JavaScript that interacts with this logic and how I compile this Rust application to WASM. Thanks for sticking it out this far and making it to the end. 🎉
I personally learned a great deal while building this Rust app. I even ported it to this very blog so that I can create scaled and thumbnail versions of images before uploading them to an object store via pre-signed URLs. I will write a post about that later. For now, it means I no longer need a server to upload and resize images, which is a huge win for me.