
2026-02-13
Satellite or Space Debris tracking using computer vision
Image processing techniques to detect stars and moving objects from camera images
If you ever look at a point light source through a camera, you might expect it to appear as an ideal point in the image, but interestingly it does not. Point objects in space are recorded not as points but as a time-integrated measurement of photon flux (the number of photons passing through a unit area per unit time) on a discrete sensor grid, shaped by the camera's sensor and optics, the atmosphere, diffraction, and motion blur. The Point Spread Function (PSF) describes the response of a focused optical imaging system to an idealized point source of light; for a given camera it is a blurry blob. The final image is an overlap of the PSFs of infinitely many small points, each shifted and scaled by the intensity of its point, which is exactly a convolution of the camera's PSF with the ideal image.
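A minimal sketch of this idea, assuming a simple Gaussian PSF (a common simplification; real PSFs also carry diffraction and motion-blur structure):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# An "ideal" scene: one point source on an otherwise empty sensor grid
scene = np.zeros((64, 64), dtype=float)
scene[32, 32] = 1000.0  # photon count concentrated in a single pixel

# Approximate the camera PSF by a Gaussian and convolve with it
# (gaussian_filter is convolution with a Gaussian kernel; sigma is in pixels)
observed = gaussian_filter(scene, sigma=1.5)

# The recorded image is now a blurred blob spread over several pixels
print(observed[30:35, 30:35].round(1))
```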
For tracking the motion of a space object we need to estimate the approximate centre of the imaged object, since what we record is not an ideal point. That is why, instead of taking the geometric centre of the bright pixels, we compute an average weighted by the pixel intensity I(x, y), which averages out noise from the atmosphere, motion blur, or non-ideal distortions in the sensor. For a streaked object, the measured centroid corresponds to the integral of the object's position over the exposure time.
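A minimal numpy sketch of this intensity-weighted centroid, x_c = Σ x·I(x, y) / Σ I(x, y) and likewise for y_c (the function name and the toy patch are illustrative):

```python
import numpy as np

def weighted_centroid(patch):
    """Intensity-weighted centroid (row, col) of an image patch."""
    rows, cols = np.indices(patch.shape)
    total = patch.sum()
    return (rows * patch).sum() / total, (cols * patch).sum() / total

# Tiny example: a 3x3 blob whose brightest pixel is off-centre
patch = np.array([[0., 1., 0.],
                  [1., 5., 3.],
                  [0., 1., 0.]])
print(weighted_centroid(patch))  # ~ (1.0, 1.18): pulled toward the bright side
```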
In the image some objects appear as points (like stars) and others as streaks because of the relative angular velocity of the object with respect to the camera: the Earth has angular motion and the object has its own. If the camera has exposure time T and the object moves significantly relative to the camera during T, it creates a streak; otherwise it produces a dot. With ω = dθ/dt we get dθ = ω·dt, the angular displacement during the exposure, which maps to a displacement on the image plane.
If the camera is tracking the stars, satellites show up as streaks because they move relative to the star frame of reference: a satellite moving at 1–2 degrees per second has an angular velocity of 3600–7200 arcsec/s, so it crosses many pixels during the exposure and appears as a bright line. The reverse is also true: if the camera tracks the satellite, the stars streak instead, because of the Earth's rotation and the choice of reference frame.
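A rough back-of-the-envelope sketch; the plate scale and exposure time below are made-up example values, not properties of any particular sensor:

```python
# Illustrative streak-length estimate for a star-tracking camera
omega_arcsec_per_s = 3600.0        # ~1 deg/s apparent satellite motion
exposure_s = 0.5                   # exposure time T
plate_scale_arcsec_per_px = 5.0    # hypothetical plate scale

streak_length_px = omega_arcsec_per_s * exposure_s / plate_scale_arcsec_per_px
print(streak_length_px)  # 360 pixels -> clearly a streak; a star barely moves
```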
Denoising without destroying the signal
The goal of the denoising operation is to suppress high-frequency noise, isolated hot pixels, and minor background fluctuations while preserving elongated, low-contrast structures (faint streaks). A naive Gaussian blur does not work because it reduces the peak SNR of faint streaks; instead, denoising here should be treated as variance reduction with signal preservation.
So by denoising we want the background to become smooth (low variance), stars to remain compact (the PSF is not smeared), streaks to remain continuous (not dotted), and hot pixels/outliers to be removed without deleting faint streak pixels.
Remove Hot pixels using median filter
A median filter is ideal here because it dampens single-pixel spikes but keeps edges and lines better than a Gaussian blur. Each pixel is replaced by the median of its local neighbourhood, so an isolated hot pixel is an extreme outlier that the median simply rejects. The kernel size should be kept at 3x3 or 5x5; a larger kernel starts removing faint streaks.
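A minimal OpenCV sketch of this step; the filename is hypothetical and the frame is assumed to be 8-bit grayscale:

```python
import cv2

frame = cv2.imread("ssa_frame.png", cv2.IMREAD_GRAYSCALE)  # hypothetical file

# 3x3 median filter: each pixel is replaced by the median of its
# neighbourhood, so isolated hot pixels are rejected while edges
# and thin streaks survive better than with a Gaussian blur
despiked = cv2.medianBlur(frame, 3)
```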
Reduce fine grain using Non-Local Means (NLM)
NLM averages pixels that have similar neighbourhoods, even if they are far apart. It reduces stochastic noise strongly while keeping coherent structures more intact than a Gaussian blur. The patch size controls how local patterns are compared, and the search window controls how far the filter looks for similar patches.
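A minimal OpenCV sketch, assuming `despiked` from the median-filter step above; the filter strength and window sizes are illustrative starting points, not tuned values:

```python
import cv2

# Non-Local Means: averages pixels whose 7x7 patches look alike,
# searching within a 21x21 window around each pixel
denoised = cv2.fastNlMeansDenoising(
    despiked,              # output of the median-filter step
    h=10,                  # filter strength: higher removes more noise (and signal)
    templateWindowSize=7,  # patch size used to compare local patterns
    searchWindowSize=21,   # how far to look for similar patches
)
```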
Enhance the denoised image to improve SNR
Denoising reduces the noise variance, but faint streaks can still be close to the background level. So we need to boost the detectability of the signal: we want an increased separation between the signal statistics and the background statistics.
1. Remove low-frequency background (flatten)
SSA frames often have gradients such as sky glow or vignetting; if these are not removed, any global thresholding will fail. We have to estimate the background B(x, y) with a method that ignores small bright features, such as rolling-ball background subtraction, which leaves the background roughly zero-mean and consistent. Then subtract the background from the denoised image.
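A minimal sketch using scikit-image's rolling-ball estimator (available in scikit-image 0.19+), assuming `denoised` from the previous step; the radius is an assumption and just needs to be larger than any star or streak width so the ball "rolls under" them:

```python
from skimage import restoration

# Estimate the smooth, low-frequency background (glow, vignetting)
background = restoration.rolling_ball(denoised, radius=50)

# Subtract it so the background is roughly flat and zero-mean
flattened = denoised.astype(float) - background
```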
2. Matched filtering for enhancement of streaks
A faint streak is a line-like template, so the best way to improve its SNR is to correlate the image with something line-like. Using a multi-orientation line filter, we convolve the image with a thin line kernel at multiple angles; at the correct angle the streak pixels add constructively while the background averages out. The output is a streak-response map. A morphological top-hat enhancement is then applied to amplify bright, small, and elongated features.
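A minimal sketch of a multi-orientation line filter followed by a top-hat, assuming `flattened` from the previous step; the kernel length, angle step, and structuring-element size are illustrative choices:

```python
import cv2
import numpy as np

def line_kernel(length, angle_deg):
    """Thin line kernel of the given length and orientation, normalised to sum 1."""
    k = np.zeros((length, length), dtype=np.uint8)
    c = length // 2
    a = np.deg2rad(angle_deg)
    dx, dy = np.cos(a), np.sin(a)
    cv2.line(k, (int(c - dx * c), int(c - dy * c)),
                (int(c + dx * c), int(c + dy * c)), 1, 1)
    k = k.astype(np.float32)
    return k / k.sum()

img = flattened.astype(np.float32)

# Correlate with line kernels at several orientations and keep, per pixel,
# the strongest response: streak pixels add constructively at the right angle
responses = [cv2.filter2D(img, -1, line_kernel(15, a)) for a in range(0, 180, 15)]
streakness = np.max(responses, axis=0)

# White top-hat: keeps bright structures smaller than the structuring element,
# further suppressing what is left of the smooth background
se = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (15, 15))
enhanced = cv2.morphologyEx(streakness, cv2.MORPH_TOPHAT, se)
```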
Thresholding the enhanced image to isolate bright pixels
We now threshold not the original image but the enhanced image, where the background is flatter and streaks/stars are amplified. A good way to threshold is to estimate the background mean (or median) and dispersion of the enhanced image and set threshold = mean of enhanced image + k * std of enhanced image.
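A minimal sketch of this threshold, assuming `enhanced` from the previous step; using the median and a MAD-based sigma is a slightly more robust variant of the mean + k * std rule, and k is a tunable sensitivity:

```python
import numpy as np

k = 4.0  # sensitivity: lower catches fainter streaks but admits more false positives

# Robust background statistics from the enhanced image
bg_median = np.median(enhanced)
sigma = 1.4826 * np.median(np.abs(enhanced - bg_median))  # MAD-based std estimate

threshold = bg_median + k * sigma
mask = (enhanced > threshold).astype(np.uint8)  # binary mask of candidate pixels
```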
Detect Stars and Streaks and compute subpixel centroids
In this step we label connected regions in the binary mask, then for each region we compute shape properties. A classical way is to compute the covariance of the pixel coordinates weighted by pixel intensity.
The eigenvalues λ1 >= λ2 of this covariance give the elongation: stars have elongation near 1 (roughly circular), while streaks have large elongation.
To find subpixel centroids we use the intensity-weighted centroid, which is the natural choice given the PSF distribution.
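A minimal scipy/numpy sketch covering these three steps, assuming `mask` and `enhanced` from the previous steps; the elongation cutoff of 3 is an arbitrary illustrative value:

```python
import numpy as np
from scipy import ndimage

labels, n = ndimage.label(mask)           # connected regions in the binary mask
detections = []
for region in range(1, n + 1):
    ys, xs = np.nonzero(labels == region)
    if ys.size < 3:                       # too small to measure a shape reliably
        detections.append(("star", (ys.mean(), xs.mean()), 1.0))
        continue
    w = enhanced[ys, xs].astype(float)    # pixel intensities as weights

    # Intensity-weighted centroid (subpixel position)
    yc, xc = np.average(ys, weights=w), np.average(xs, weights=w)

    # Intensity-weighted covariance of the pixel coordinates
    cov = np.cov(np.vstack([ys, xs]), aweights=w)
    lam1, lam2 = sorted(np.linalg.eigvalsh(cov), reverse=True)
    elongation = np.sqrt(lam1 / max(lam2, 1e-9))

    kind = "streak" if elongation > 3.0 else "star"   # illustrative cutoff
    detections.append((kind, (yc, xc), elongation))
```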
Full post: Medium