
Scientific grade hyperspectral data from UAV platforms

Hyperspectral imaging from UAVs has in some fields been discredited because poorly performing systems have been used, and general conclusions about the quality of hyperspectral imaging from UAVs have been drawn on that basis. The following explains how and why scientific grade hyperspectral data acquisition from UAVs is indeed possible today, and why HySpex Mjolnir is currently the only truly scientific grade hyperspectral imaging solution for UAV platforms.

Obtaining scientific grade data from a UAV platform requires:

  • A high-end hyperspectral camera, featuring:
    • Sharp optics and low distortions
    • Low stray light
    • Low F# optics (high throughput)
    • Detector with high speed and low noise floor
    • Good SNR across the full spectral range
  • Stable and traceable calibration
  • Stable platform (roll, pitch and heading stabilization)
  • High performance INS and good boresight calibration for direct georeferencing
  • Operational parameters

Many of these points are impossible to compare by simply reading data sheets and brochures; a relevant comparison of data quality and operational readiness requires a detailed investigation of real data, test reports or a live demo.

High-end hyperspectral camera

Mjolnir V-1240 operated with the Camflight UAV.

It can be difficult to distinguish a high quality solution from a low quality one based only on standard marketing material. The following elaborates on some quality parameters for hyperspectral cameras that are important to be aware of when choosing a sensor for a given application. Some points are generic to any hyperspectral camera, while others are specific to UAV applications.

Sharp optics and low distortions

HySpex Mjolnir offers extremely sharp optics per pixel and per spectral band for both the VNIR and SWIR spectral ranges. The V-1240 has 1240 spatial pixels and 400 bands, and the spatial resolution (FWHM) is as low as 1.1 times the spatial sampling. Spectrally, the resolution (FWHM) at 400 bands is 1.5 times the spectral sampling across the whole FOV and spectral range, yielding extremely sharp images. With these specifications, the cameras have less than 10% of a band smile and less than 10% of a pixel keystone.
The S-620 has 620 spatial pixels and 300 spectral bands in the SWIR spectral range. Its sharpness, keystone and smile are comparable to or even better than those of the V-1240. The keystone and smile specifications are among the most important specifications of a hyperspectral camera; at the same time, they are among the parameters most frequently overlooked during the selection phase.


Keystone in a data set will generate unphysical spectra.


A camera specified to have 1240 pixels should be expected to record 1240 unique and correct spectral signatures, reproducing the real radiance from each pixel in the scene.
Some suppliers offer solutions specified to have 50% or more keystone. A camera with 50% keystone will acquire data where 50% of the spectral signature in a given pixel originates from the pixel next to it. This is different from the linear mixture of two spectral signatures recorded by a pixel containing two different objects.
With keystone in the data, some part of the spectral signature comes from one of the objects, while other parts of the spectrum have a 50% influence from the object next to it, in essence generating an unphysical spectral signature. This effect reduces the spatial resolution, as larger objects are required to get one pure pixel. Keystone will also introduce effects like misidentification, misclassification and false alarms in your data product.
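The band-dependent mixing that keystone causes can be sketched numerically. This is an illustrative model with made-up Gaussian spectra for two neighboring objects, assuming (for simplicity) that the misregistration grows linearly from zero at the first band to 50% of a pixel at the last band:

```python
import numpy as np

# Hypothetical spectra for two neighboring pixels (objects A and B).
bands = np.linspace(400, 1000, 200)                 # wavelengths in nm
spectrum_a = np.exp(-((bands - 550) / 60.0) ** 2)   # e.g. a green-peaked object
spectrum_b = np.exp(-((bands - 850) / 80.0) ** 2)   # e.g. an NIR-bright object

# Keystone shifts a pixel's footprint by a band-dependent fraction of a
# pixel. Here the misregistration grows linearly from 0 at the first band
# to `max_keystone` at the last band (a simplified, illustrative model).
max_keystone = 0.5                                  # 50% of a pixel at worst
k = np.linspace(0.0, max_keystone, bands.size)

# Recorded spectrum of the pixel nominally covering object A:
recorded = (1.0 - k) * spectrum_a + k * spectrum_b

# Unlike a fixed 50/50 linear mixture, the mixing ratio varies with
# wavelength, so `recorded` matches no physical combination of A and B.
fixed_mixture = 0.5 * (spectrum_a + spectrum_b)
```

Because the mixing fraction changes from band to band, the recorded spectrum cannot be reproduced by any single linear mixture of the two object spectra, which is exactly what makes it unphysical.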

Stray Light

Another important limitation that some sensors suffer from is stray light, especially in the blue region. A sensor with a lot of stray light can often be identified in the data by a blueish haze in the images, together with incorrect radiance and reflectance values. So again, ask for test reports and sample data before selecting a sensor.

Low F# optics

The more spatial pixels and spectral bands a camera has, the less light reaches each pixel/band. Thus, a high resolution hyperspectral imaging system should be very light sensitive. For UAV employment, and especially for fixed-wing UAV platforms, the camera moves at a very high speed relative to the flying altitude. This implies that the camera must be run very fast, resulting in a short integration time (exposure time). Hence, a low F# is important to get as much light on each pixel as possible, yielding the highest SNR achievable under the operational requirements.


Mjolnir V-1240 has F1.8 optics and Mjolnir S-620 has F1.9.


A low F# requires both larger optics and mechanics, as well as more complex optics to maintain sharp images with low distortions. At the same time, it is of great importance for UAV employments that the camera should be small. The Mjolnir design has focused on reaching the best trade-off between data quality and size/weight, with the main emphasis on maintaining true scientific grade data quality.
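As a rough illustration of why the F-number matters: light throughput scales approximately as 1/F#² (ignoring transmission differences between designs), while the frame rate caps the available integration time. The F2.8 comparison value below is hypothetical; the F1.8 and 285 fps figures are from the Mjolnir V-1240 specifications.

```python
# Relative light throughput scales roughly as 1/F#^2.
f_fast, f_slow = 1.8, 2.8            # V-1240 optics vs a hypothetical F2.8 design
gain = (f_slow / f_fast) ** 2        # ~2.4x more light per pixel at F1.8

# At a fixed frame rate, the integration time can never exceed the frame
# period, so extra throughput translates directly into SNR.
frame_rate = 285.0                    # fps (V-1240 maximum, from the spec table)
max_integration_ms = 1000.0 / frame_rate  # ~3.5 ms upper bound per frame
```

At the maximum frame rate, the F1.8 system collects roughly two and a half times the signal of the hypothetical F2.8 system within the same exposure budget.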

Detector with high speed and low noise floor

A small hyperspectral system will have a smaller pixel pitch on the detector than a larger camera. This implies a smaller full well (charge capacity) on the detector pixels. In combination with a high light throughput in the optics, it is therefore also important to have a very low noise floor (read noise) on the detector. A lower noise floor means a higher dynamic range in the images.
As an example, the Mjolnir V-1240 has a 2.3 electron noise floor. Assume this system is mounted on a fixed-wing UAV flying at low altitude (i.e. short integration time), with a low sun angle yielding very challenging conditions with a lot of shadows. In the acquired data, the signal from the shaded scene generates 24 photoelectrons in one pixel and band. The Mjolnir V-1240, with its 2.3 electron noise floor, will have an SNR of 10.43 for this signal. A noisier system with e.g. 13 electrons read noise will only have an SNR of 1.85, if all other parameters are identical.
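The arithmetic above compares the signal to the noise floor alone; note that photon shot noise is neglected in these figures, which lowers both numbers if included:

```python
import math

# SNR for a weak (shadow) signal, computed as signal / noise floor,
# as in the example above. Photon shot noise is neglected there.
signal_e = 24.0                  # photoelectrons in one pixel/band

snr_mjolnir = signal_e / 2.3     # 2.3 e- noise floor  -> ~10.43
snr_noisy   = signal_e / 13.0    # 13 e- noise floor   -> ~1.85

# Including shot noise (sqrt(S + r^2)) for comparison:
snr_mjolnir_shot = signal_e / math.sqrt(signal_e + 2.3 ** 2)
```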

Good SNR across the full spectral range

A smaller pixel pitch (to scale down the system size) means a smaller full well compared to systems with a larger pitch, and a smaller full well gives a lower peak SNR. Peak SNR is approximately the square root of the full well if the noise floor is small. The full well of the detector used for the V-1240 is about 10000 electrons, yielding a peak SNR of approximately 100 at the native sensor resolution. To get a higher SNR, the dispersion in the camera covers more bands than the output cube. This is a good solution if the read noise is sufficiently low. For the Mjolnir V-1240, 3.24 pixels are binned together, which effectively increases the full well by a factor of 3.24. The peak SNR is thus about 180 at the full output resolution (1240 pixels x 400 bands). The binning will of course increase the readout noise, however to no more than approximately 4.14 electrons, which is still very good. The peak SNR alone is of limited use, as it only indicates the maximum SNR obtained in a band that is close to saturation; the total quantum efficiency of the whole system as a function of wavelength is also needed.
To obtain really useful information, the SNR curve needs to be specified for a given input radiance and a given (and operationally realistic) integration time. This information should be part of any test report for any system. A poorly designed system can be specified for the full 400-1000 nm range and have a decent peak SNR in the middle of the spectral range, but still be unusable for several tens of nanometers at the start and end of the range. A qualitative assessment of the SNR characteristics can be done by viewing single-band images at the edges of the spectral range of the system, as shown in the figure below.

First and last band of a Mjolnir V-1240 hyperspectral cube and example radiance spectrum from vegetation (single pixel/band, no binning).
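The binning arithmetic above can be checked directly. Numbers are taken from the text; the square-root relations assume a shot-noise-limited peak SNR and uncorrelated read noise across the binned pixels:

```python
import math

full_well = 10_000        # electrons, native detector pixel (V-1240)
read_noise = 2.3          # electrons, native pixel
bin_factor = 3.24         # spectral pixels binned per output band

# Peak SNR ~ sqrt(full well) when the noise floor is small:
peak_snr_native = math.sqrt(full_well)               # ~100
peak_snr_binned = math.sqrt(full_well * bin_factor)  # ~180

# Binning adds read noise in quadrature across the binned pixels:
read_noise_binned = read_noise * math.sqrt(bin_factor)  # ~4.14 e-
```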

Stable and traceable calibration

It is very important that the radiometric calibration is traceable to e.g. NIST or PTB standards, and the calibration procedures and standards used (including their accuracies) should be available for the users.
To achieve good absolute orthorectification accuracies (on the order of the pixel size, i.e. ~2.7 cm at 100 m altitude for Mjolnir), the sensor manufacturer must supply a very precise sensor model as part of the calibration data. A precise sensor model is key for direct georeferencing.
Any hyperspectral system needs to maintain a stable and accurate radiometric and spectral calibration outside a controlled environment. A perfect calibration at the factory is worthless if this is not stable after transport and during operations. This means that the spectral, radiometric and geometric calibration must be stable with different temperatures, pressures and when exposed to heavy vibrations.
HySpex Mjolnir cameras are tested in a climate chamber from -20 to +50 degrees Celsius. For the S-620 the FPA temperature is stabilized at 180 K, since a low temperature is critical for the noise performance of MCT detectors. For VNIR cameras (using CMOS sensors), thermal effects on the sensor are also accounted for, ensuring accurate radiometric calibration even when the ambient temperature varies. Each instrument produced also goes through a thorough vibration test program to ensure that the system remains stable after leaving our facilities.

Stable platform (roll, pitch and heading stabilization)

There are two main categories of UAV platforms:

  • Rotor/copter solutions
  • Fixed-wing solutions

There are some main advantages and disadvantages with both solutions that should be evaluated carefully when flying a pushbroom hyperspectral camera:

Copter solutions (usually octocopters) are very good platforms that give the user full control of all the flight parameters. The altitude, speed and direction can easily be adjusted, giving great flexibility in optimizing integration time and frame rate. Copter solutions also require less logistics to fly than fixed-wing solutions and can usually carry more payload relative to their overall size. However, rotary UAVs typically have a shorter flight time than fixed-wing solutions. Another disadvantage is that the platform is exposed to significant vibrations during flight, and that the system does a lot of high-frequency rolling and pitching to stay level. To compensate for this, a high-quality gimbal is recommended, as illustrated in the figure below.

Typical flight path from PARGE for a flight done with [bottom] and without [top] a gimbal under the same wind conditions.

As is evident from the images above, a gimbal is highly recommended for a copter platform. Today there is no problem finding an octocopter platform that can carry the combined VNIR and SWIR solution (Mjolnir VS-620) with a gimbal, stay below the 25 kg limit in total and fly for around 20-30 minutes. The Mjolnir solutions have been tested and integrated on several different platforms, our portfolio of qualified platforms is expanding, and integration on new platforms is generally straightforward.

Fixed-wing solutions can usually fly relatively long, they are more stable when it comes to rolling and pitching, and using a gimbal is not as important as for a copter solution. The disadvantages are that you often have a very confined space for the payload and need more space for take-off and landing. Another limitation of fixed-wing solutions is the flight speed versus altitude. For the UAV to fly, you usually need a speed of around 20-25 m/s. If you are limited to, for example, a maximum altitude of 150 m above ground, you would need to run your system at very high frame rates, which limits the maximum integration time you can use and therefore also the SNR in the images. For higher altitude flights with longer range and higher coverage requirements, the fixed-wing solution is more attractive.
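The speed-versus-altitude constraint can be made concrete with the V-1240's 0.27 mrad pixel FOV from the specification table; the altitude and speed below are illustrative values, not a recommended flight profile:

```python
# Rough along-track frame-rate requirement for a fixed-wing flight.
ifov_mrad = 0.27          # V-1240 pixel FOV (from the spec table)
altitude_m = 150.0        # illustrative maximum altitude above ground
speed_ms = 22.0           # illustrative fixed-wing cruise speed

gsd_m = altitude_m * ifov_mrad * 1e-3        # ground pixel ~4 cm
fps_required = speed_ms / gsd_m              # fps for square ground pixels
max_integration_ms = 1000.0 / fps_required   # exposure budget per line
```

The required rate comes out above 500 fps, well beyond the V-1240's 285 fps maximum, which is exactly why low-altitude fixed-wing flights force either along-track undersampling or very short integration times.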

High performance INS and good boresight calibration for direct georeferencing

A common assumption or misunderstanding is that you need ground control points in every flight line to get good absolute position accuracies in the orthorectified images. This is not the case. With a very good geometric characterization (sensor model), a stable system and a good INS, you can do direct georeferencing with very high absolute position accuracies. To get good direct georeferencing results you need the following:

  • High performance INS system
  • Good GPS antenna
  • Avoidance of interference from motors/speed controllers on the navigation system and antenna
  • Alignment of INS
  • Accurate boresight calibration
  • High resolution DEM
  • Mechanical stability between sensor and INS
  • Accurate sensor model
All HySpex UAV sensors are delivered with a precise boresight calibration, based on a boresight calibration flight near our premises. This means that when we deliver a system, we have already determined the angles between the IMU coordinate system and the camera coordinate system, so that the INS data is already transformed to the camera coordinate system. To do this, we perform a very accurate boresight calibration flight over an area with 50 well-defined GCPs. Here is an example of such a boresight calibration image from the V-1240, taken at ETH in Zurich:

Boresight site in Zurich (subset). Zoom in to see automatically detected GCPs in the lower image.

This is an optimal layout of the boresight site. There are yellow markings on the road that have been measured with a DGPS to around 2 cm accuracy in X and Y and 5 cm in Z. As you can see, we have tilted the flight path so that the GCPs enter the FOV on the left side and exit the FOV on the right side. Since we have a hyperspectral camera, the yellow points can easily be detected automatically, which makes it easy to do the actual boresight calibration. For the site above we achieved 9 mm accuracy in the X and Y directions after rectification. As mentioned above, you need a high-performance INS to do direct georeferencing with high accuracy. HySpex normally provides a fully integrated solution with the Applanix APX-15 UAV INS. Mjolnir systems can also be delivered without an internal INS solution (using your existing INS system). In this case, the only interface between Mjolnir and the external INS will be an event signal coming from the Mjolnir system, to give full control of timing and synchronization.

Operational parameters and training

The Mjolnir system is a completely self-contained system including the hyperspectral sensor, INS, a high performance (i7) onboard computer, and a link for remote control, with everything ruggedly mounted inside a single chassis with an easy interface to the selected gimbal solution. The ground station and software give full control of all relevant sensor parameters, as well as real-time previews on the ground station.

The system also has an external IO connector providing externally available 12 V power, a trigger and a second event input. This makes it very easy to integrate other hardware such as an RGB camera, thermal camera, LIDAR, etc.

As for any system, you need to operate it in the best way for your operational scenario to achieve the highest data quality. At NEO we always offer comprehensive training for both the hyperspectral camera and the UAV operation. Different flight campaigns may have different sets of operational requirements (e.g. spatial resolution, size of area), and it is very important that the user has a good tool to define the correct flight parameters to match them. The tool must identify the parameters that give optimal dynamic range and SNR in the images. A good training program and good tools for flight planning and optimization of camera parameters ensure that the user is able to operate the camera correctly and optimize the operational parameters (integration time, frame rate, flight speed, altitude, overlap, etc.) in order to achieve the best SNR and overall data quality.

The keystone effect and its influence on classification results

Every industrial hyperspectral detector on the market can distinguish between an apple and a tomato due to their different spectral signatures and the typically large number of pixels available on each object. Would this still be the case if the apple and the tomato were only a few pixels large?

The keystone specification of any hyperspectral system is one of the most important parameters defining its usability. However, it is also the most overlooked parameter during the selection phase. Keystone introduces a spatial misregistration between the spectral bands. It means that the spectral information in an image pixel is made up of parts of the spectra from different spatial positions in the scene around that pixel. In essence, keystone in the data will generate unphysical spectra that do not correspond to the objects in the image.

Evaluating the performance of a hyperspectral camera based only on the nominal specifications can be very hard. It is common to encounter misleading keystone values, and it is not uncommon to find keystone defined in datasheets simply as “low” or “negligible”. This hardly provides useful information by itself, so it is preferable to define it as a percentage of the pixel size, e.g. 10% of a pixel (or to supply a keystone map of the whole sensor). Some of the many suppliers of hyperspectral cameras offer solutions with nominal keystone values in the range of 10% to 75% of the pixel size (some even try to make it look smaller by defining it as e.g. ±35%), and even higher values for less robust systems.

Suppose that a pushbroom-scanning hyperspectral camera is being used, with e.g. 1240 nominal pixels specified in the datasheet, and that the keystone in the system is, misleadingly, simply specified as “low”. Without further information, one should expect the camera to record up to 1240 unique and correct spectral signatures per scanned spatial line, reproducing the real radiance from each pixel in the scene. Further assume that there are two spectrally different objects in the scene, positioned next to each other. Should “low” in this case really mean 75% keystone, then 75% of the energy in the most misaligned bands comes from the neighboring pixel. Note that this is not the same scenario as one pixel containing two different objects, which results in a linear mixture of two spectral signatures. With keystone in the data, some part of the spectral signature will come from one of the objects, while other parts of the spectrum will have a 75% influence from the object next to it, giving an unphysical spectral signature. The keystone effect will effectively reduce the spatial resolution, as larger objects will be needed to get one pure pixel, i.e. one that is not influenced by the surrounding ones. This will obviously introduce effects like misidentification, misclassification and false alarms in the data product.

So, is low keystone distortion something that should only be expected in a scientific-grade camera and only needed for high-end research, or is it also a requirement in e.g. an industrial camera? Consider the sorting of recycled glass. A major problem in the glass recycling business is that glass-ceramics (transparent and colored) are put into the recycling containers for normal glass. When the recycled glass is melted, large glass-ceramic pieces (> 2 mm) will not melt at the same temperature as the float (normal) glass, causing defects in the finished product. Such impurities could also destroy or damage the glass cutters and other equipment along the process.

To effectively remove the glass-ceramics that could potentially ruin a recycling batch, contamination pieces as small as 2 mm must be detected. Suppose a 1 m wide conveyor belt is used to transport the cullet at 1 m/s. To stay above the Nyquist resolution limit, a camera speed of 1000 fps or more is needed. Additionally, a camera with 1000 or more spatial pixels is needed to get two pixels per 2 mm object in the across-track direction.
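The sizing above follows directly from requiring two pixels per smallest object in both the across-track and along-track directions; a quick sanity check:

```python
# Nyquist-style sizing for the glass-sorting example: 2 mm targets on a
# 1 m wide belt moving at 1 m/s, with two pixels per object in both axes.
target_m = 0.002
belt_width_m = 1.0
belt_speed_ms = 1.0

pixel_m = target_m / 2                    # 1 mm ground pixel
pixels_across = belt_width_m / pixel_m    # spatial pixels needed
fps = belt_speed_ms / pixel_m             # frames (lines) per second needed
```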

Using actual data collected from a recycling facility, the spectra from 3 different glass-ceramic pieces look like the samples selected below:

Actual spectra from 3 different glass-ceramics pieces.

Additionally, we have the spectra from 4 different recycled float glass types (green glass, brown glass, transparent with a weak blue component, and grey glass):

Four different recycled float glass types.

By creating an artificial scene with glass-ceramics samples that are 1, 2, 3 and 4 pixels large, all with the float glass spectra as background, the effect of keystone can be demonstrated:

Keystone in an infinitely sharp camera.

In this example, an infinitely sharp camera is used, i.e. the PSF is not considered, and the objects are perfectly centered on the pixels in both the X and Y directions. In a real scene, these factors must obviously also be considered when selecting the correct pixel resolution for the camera.

Adding 10%, 30% and 75% keystone to the scene, the resulting RGB preview of the objects is shown in the figure below. Note that keystone has only been added in the horizontal direction, so there will be no distortions in the vertical direction.

Demonstration of keystone.

As can be seen, the keystone makes some changes in the pixels that are visible in the RGB bands, but will it affect classification results, e.g. with a simple classification algorithm such as the Spectral Angle Mapper (SAM)? This algorithm determines the multidimensional angle between two spectra: the smaller the angle, the more similar they are. Different threshold values can be set for the angles corresponding to the difference between the spectrum of the object to detect and the other objects in the scene. In this example, only the default angle in ENVI (0.1) is used.
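SAM itself is straightforward: the angle is the arccosine of the normalized dot product between two spectra. A minimal sketch with made-up four-band spectra (not the glass data):

```python
import numpy as np

def spectral_angle(s1, s2):
    """Multidimensional angle (radians) between two spectra,
    as used by the Spectral Angle Mapper (SAM)."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    cos = np.dot(s1, s2) / (np.linalg.norm(s1) * np.linalg.norm(s2))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# A pixel matches a reference if the angle is below the threshold
# (0.1 rad, the ENVI default used in this example).
reference   = np.array([0.2, 0.5, 0.9, 0.4])
pixel_same  = 1.7 * reference                  # scaled copy: angle 0
pixel_other = np.array([0.9, 0.4, 0.2, 0.1])   # spectrally different
```

Because the angle is invariant to a uniform scaling of the spectrum, SAM is largely insensitive to overall illumination level, which is one reason it is a popular baseline classifier.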

Running SAM on the original image without keystone gives the following results:

SAM, infinitely sharp camera.

All the objects in the scene are detected unambiguously.

Repeating the exercise, attempting to detect the same objects in the data with 10%, 30% and 75% keystone, yields the following results:

SAM, cameras with keystone.

In this case, 10% keystone does not change the results, even for the single-pixel objects. For a system with 30% keystone, the contribution of the effect is starting to become obvious. Looking in detail at the Float1 (yellow) background, all the Ceramic pieces are detected, but some of the border pixels are left unclassified (black pixels in the SAM image). For this application, where glass-ceramics is the detection target, that would not matter; but in other applications, Float1 could have been the target, and a one-pixel object would have been missed.

Looking at the Float2 background section (white background), noticeably more border pixels are missing, but the glass-ceramic pieces are classified correctly. With Float3 as background (magenta), 30% keystone has no influence on the classified result. On the other hand, with Float4 as background (brown), the detection is starting to fail and the objects are no longer classified correctly. Note that this is with only 30% keystone. Furthermore, one pixel on Ceramic3 is no longer classified correctly, and there are some wrong border pixels on Ceramic2 as well.

Continuing to the last case, with 75% keystone, the amount of classification errors has increased noticeably: Ceramic2 classifies as Float3, Ceramic3 classifies as Ceramic1, Ceramic1 classifies as Ceramic2, and many of the glass-ceramic pixels do not belong to any classification class. The worst case is Ceramic2 on the Float2 background, where only one pixel of the 3-pixel object is classified correctly and 4 pixels are wrongly classified (in the object and at the borders).

From this simple example, it is evident that keystone introduces huge errors in the data that would result in a significant number of missed objects and many false alarms. As mentioned above, a camera with a large keystone value will effectively have reduced spatial resolution, meaning that a higher resolution is needed to get one pure pixel if the keystone amount per pixel is the same.

As shown above, keystone will corrupt the results when the spectrum to be detected is known. Since keystone essentially creates unphysical spectra, it will also corrupt other commonly used algorithms such as Anomaly Detection (CRX) and Principal Component Analysis (PCA).

With the latter, a benchmark is needed to assess how the PCA is corrupted by the keystone. Using a scene with a horizontal displacement of the objects relative to the pixels gives a linear mixture of two objects (spectra) without any unphysical spectra being generated (as mentioned above). To compare with the keystone effect, PCA bands 2, 3 and 4 are used to generate a false-color image, first for the scene with objects displaced by 75% of a pixel and then for the scene imaged with 75% keystone.

PCA classification.

Using this method, Ceramic1 and Ceramic2 cannot be easily distinguished when the scene is imaged with 75% keystone. Also, the background colors for Float1, 2, 3 and 4 have changed, indicating that the unphysical spectra have rotated the PCA into a new direction.
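For readers who want to reproduce this kind of comparison, the PCA of a hyperspectral cube can be computed from the centered pixel-by-band matrix via SVD. This is a generic sketch; the cube below is a random placeholder, not the glass data:

```python
import numpy as np

def pca_bands(cube, n_components=3):
    """PCA of a hyperspectral cube (rows x cols x bands) via SVD;
    returns the first principal-component score images."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    X -= X.mean(axis=0)                       # center each band
    # Right singular vectors are the principal spectral directions.
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    scores = X @ vt[:n_components].T
    return scores.reshape(rows, cols, n_components)

# Tiny synthetic cube as a stand-in for real data; band-dependent mixing
# (keystone) perturbs the spectral directions and thus rotates these PCs.
rng = np.random.default_rng(0)
cube = rng.random((8, 8, 16))
pcs = pca_bands(cube, 3)
```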

For the different hyperspectral camera technologies there are different origins of the keystone effect, making it important to understand the design of the sensor to be used, should reliable information on keystone values not be available. It should also be noted that essentially all instruments are affected by keystone to some degree; any misalignment between the bands will cause errors like those described above.

As an example, an increasing number of commercially available systems are offered with a common slit combining the VNIR and the SWIR spectral range. The main argument for using a common slit is that both the VNIR and SWIR sensors image the exact same spatial location at any given time. This allows for potentially very good co-registration between the VNIR and SWIR spectral ranges, if (and only if) the system is stable, both detectors are triggered by the same trigger, and the same exposure time (integration time) is used. However, a common slit system with 75% keystone essentially eliminates all the arguments for having a common slit in the first place, as there is already a 75%-of-a-pixel misalignment between the bands. In this scenario, the data quality would be better with two separate instruments with low keystone (e.g. 10% of the pixel size or better) combined with a precise geometric co-registration of the data. For airborne or UAV use, any dual-camera configuration (without a common slit) with an accurate boresight calibration and characterization (and a reliable, high-precision navigation system) should be able to achieve better than 30% of a pixel co-registration between the VNIR and SWIR spectral ranges. Provided that each of the spectral ranges has e.g. 10% keystone, such a system will perform significantly better than a common slit system with 75% keystone.

To conclude, misalignment between bands can have many different origins, and the errors in the data caused by this effect are not negligible. When acquiring a hyperspectral camera, it is imperative to have sufficient information about the distortions affecting the system. A serious supplier of hyperspectral imaging systems will always deliver a test report with the camera, in which effects like spectral and spatial resolution, keystone and smile, stray light, SNR as a function of wavelength, NER and so on are measured and characterized on the actual camera that is supplied. Using only Nyquist as a reference when selecting a camera (i.e. selecting a camera with 2 pixels per smallest object you want to detect) is only a good idea if the camera has very low keystone (< 10% of a pixel). For cameras with e.g. 75% keystone, you should have at least 4 pixels per smallest object to be detected; but keep in mind that even with a pure pixel from a 4-pixel object, the keystone effect will still introduce a lot of false alarms, which must be acceptable for the operational scenario.

Finally, an important specification for any hyperspectral camera is its stability and repeatability. Suppliers offering systems with high keystone levels (e.g. 50% or more) cannot at the same time claim that the system provides great stability and repeatability. A system with a lot of keystone will essentially never be repeatable, as the behavior of a pixel will change depending on the spectrum of the object in the neighboring pixel.

Data quality: HySpex Mjolnir VS-620

In this post we would like to share some performance parameters of our Mjolnir VS-620 for UAV, field and airborne applications. (We recently also designed a 1 m lens for lab applications.)

We have previously published an article about data quality from UAV applications, covering topics from operating the system correctly to the importance of having sharp optics: https://www.linkedin.com/pulse/scientific-grade-hyperspectral-data-from-uav-platforms-trond-l%C3%B8ke/

In this article we show the measured key quality parameters of the VS-620 system. The table below lists the top-level specifications of the VS-620.

| Specification | HySpex VNIR-1024 | HySpex VNIR-1800 | HySpex VNIR-3000n | HySpex SWIR-384 | HySpex Mjolnir VS-620 | HySpex Mjolnir V-1240 | HySpex Mjolnir S-620 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Spectral range (nm) | 400-1000 | 400-1000 | 400-1000 | 960-2500 | 400-2500 | 400-1000 | 970-2500 |
| Spectral sampling (nm) | 5.4 | 3.26 | 2.0 | 5.45 | 3.0 / 5.1 | 3.0 | 5.1 |
| Spatial pixels | 1024 | 1800 | 3000 | 384 | 620 | 1240 | 620 |
| Spectral channels | 108 | 186 | 300 | 288 | 490 | 200 | 300 |
| Field of view (deg) | 16.1 | 16.5 | 16 | 16.0 | 20 | 20 | 20 |
| Pixel FOV (mrad) | 0.28 / 0.56 | 0.16 / 0.32 | 0.096 / 0.32 | 0.73 / 0.73 | 0.54 / 0.54 | 0.27 / 0.27 | 0.54 / 0.54 |
| Bit resolution (raw data) | 12 | 16 | 12 | 16 | 16 | 12 | 16 |
| Noise floor (e-) | 11 | 2.4 | 2.4 | 150 | 2.34 / 80 | 2.34 | 80 |
| Dynamic range | 3400 | 20000 | 11000 | 7500 | 4400 / 10000 | 4400 | 10000 |
| Peak SNR | >300 | >240 | >170 | >1100 | >180 / >900 | >180 | >900 |
| Max speed (fps) | 690 | 260 | 117 | 400 | 100 | 285 | 100 |
| Power cons. (W) | 6 | 30 | 30 | 30 | 50 (with DAU/INS) | 50 (with DAU/INS) | 50 (with DAU/INS) |
| Dimensions (cm) | 30.5 x 9.9 x 15 | 39 x 9.9 x 15 | 39 x 9.9 x 15 | 38 x 12 x 17.5 | 37.4 x 20 x 17.8 | 25 x 17.5 x 17 | 25.5 x 17.5 x 17 |
| Weight (kg) | 4.2 | 5.0 | 5.0 | 5.7 | 6.0 | 4.0 | 4.5 |

For the Mjolnir VS-620, paired values are given as VNIR / SWIR where the two modules differ.

The table above does not actually tell you anything about how sharp the optics are relative to the pixels and bands. That is why this article shows the actual optical performance of the VS-620 data product. All plots below are taken from the test reports that we deliver with every system.

Spatial resolution

SWIR detectors are usually very expensive and have a limited number of pixels compared to the VNIR range. This is one of the reasons why it is especially important to gather as much information as possible per pixel in this spectral range. Below we show the measured FWHM of the point spread function across track and for all wavelengths.

When we measure the FWHM of such sharp systems, we use the trapezoidal approximation of the FWHM. This is a very pessimistic way of measuring it, so in real life the actual FWHM is even better than the plots suggest.
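A half-maximum width estimate of the kind described can be sketched as follows. This is a generic linear-interpolation estimate, not NEO's exact procedure; on a sampled Gaussian it indeed overestimates the true width slightly, i.e. it is pessimistic:

```python
import numpy as np

def fwhm(profile):
    """FWHM (in samples) of a sampled PSF profile, using linear
    interpolation at the half-maximum crossings on each flank."""
    y = np.asarray(profile, float)
    half = y.max() / 2.0
    above = np.where(y >= half)[0]
    i0, i1 = above[0], above[-1]
    # Interpolated crossings (assumes the peak is away from the edges).
    left  = i0 - (y[i0] - half) / (y[i0] - y[i0 - 1])
    right = i1 + (y[i1] - half) / (y[i1] - y[i1 + 1])
    return right - left

# A Gaussian PSF with sigma = 1.2 pixels has a true FWHM of
# 2.355 * 1.2 ~ 2.83 pixels; the estimate comes out slightly larger.
x = np.arange(-10, 11, 1.0)
gauss = np.exp(-x**2 / (2 * 1.2**2))
```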

In conclusion, the FWHM of the S-620 is sharper than 1.3 pixels everywhere in the FOV and for all bands.

In the plot below we show the FWHM of the spatial PSF of the V-1240. 

The conclusion for the V-1240 is that it, too, is extremely sharp per pixel everywhere in the FOV and over the whole spectral range.

Spectral Resolution

The specifications list the number of bands in each camera module. To know how fine the spectral features are that the imaging spectrometer can resolve, you need information about the spectral resolution (sharpness). To measure this, we calculate the FWHM of the spectral PSF for sharp spectral sources. The graph below shows the spectral FWHM for 12 narrow-band sources across the spectral range of the S-620.

The graph below shows the spectral FWHM for 23 narrow-band sources across the spectral range of the V-1240.

Both the V-1240 and the S-620 are very sharp per band for all wavelengths and across the FOV.

Keystone

As explained in one of our previous articles on LinkedIn, the keystone specification of any hyperspectral system is one of the most important parameters defining its usability as an imaging spectrometer. It is, however, also the most overlooked parameter during the selection phase (to read more: Keystone effect).

The keystone of the S-620 is shown below. To measure keystone, we scan a point source at infinity through the whole FOV of the camera in steps of 1/100 of a pixel.

The S-620 has less than 5% of a pixel keystone everywhere in the FOV.

Below you can also see that the V-1240 has extremely low keystone values as well.
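
As a rough sketch of how such sub-pixel keystone values can be quantified, the snippet below computes per-band spatial centroids of a point-source response and takes their spread. The cube and its 0.04 px drift are synthetic, invented for illustration; this is not NEO's exact test procedure.

```python
import numpy as np

def keystone_pixels(cube):
    """cube: (bands, spatial) response to a point source.
    Returns the spread of the spatial centroid across bands, in pixels."""
    x = np.arange(cube.shape[1])
    centroids = (cube * x).sum(axis=1) / cube.sum(axis=1)
    return centroids.max() - centroids.min()

# Synthetic point-source response: a Gaussian spatial PSF whose center
# drifts linearly by 0.04 px from the first band to the last.
bands, spatial = 300, 21
x = np.arange(spatial)
shift = np.linspace(0.0, 0.04, bands)
cube = np.exp(-((x[None, :] - 10.0 - shift[:, None]) ** 2) / (2 * 0.6 ** 2))
print(round(keystone_pixels(cube), 3))  # ~0.04 px, i.e. 4% of a pixel
```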

SMILE

To measure the smile, we use the same narrow-band spectral sources as for the spectral resolution measurements. In the VS-620 data, every band has one center wavelength; the smile tells you how much the center of gravity of a narrow spectral source varies across the whole FOV. Below you see this curve for the 12 narrow-band sources of the S-620.

This graph shows that the S-620 has less than 10% of a band smile everywhere in the FOV.

The graph below shows the smile for the 23 narrow band sources for the V-1240.

The V-1240 also shows very small smile distortions (less than 10% of a band).
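
The smile number can be computed analogously to the keystone: take the spectral center of gravity of a monochromatic source at each field position and look at its spread across the FOV. The sketch below uses a synthetic line with an invented 0.08-band parabolic smile curve, purely for illustration.

```python
import numpy as np

def smile_in_bands(cube):
    """cube: (field_positions, bands) response to a monochromatic source.
    Returns the spread of the spectral centroid across the FOV, in bands."""
    b = np.arange(cube.shape[1])
    cog = (cube * b).sum(axis=1) / cube.sum(axis=1)
    return cog.max() - cog.min()

# Synthetic monochromatic line whose spectral position bows by 0.08 band
# from the center of the FOV to the edges (a parabolic smile curve).
field, bands = 101, 41
b = np.arange(bands)
u = np.linspace(-1.0, 1.0, field)
center = 20.0 + 0.08 * u ** 2
cube = np.exp(-((b[None, :] - center[:, None]) ** 2) / (2 * 0.8 ** 2))
print(round(smile_in_bands(cube), 3))  # ~0.08 bands, i.e. 8% of a band
```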

Signal-To-Noise Ratio (SNR)

When designing a small UAV camera, you would typically use a compact camera/detector in the imaging spectrometer. To get as many spatial pixels and spectral bands as possible, you would typically select a detector with a small pixel pitch. In the V-1240 we use a very good detector with 3.45 µm pixel pitch, and in the S-620 we use the first ever 10 µm pixel pitch MCT detector, with exceptional performance. A smaller pixel pitch means a smaller full well, which in turn means a lower peak SNR. Even with this small pixel pitch, the Mjolnir systems give you impressive SNR values. The S-620 has F1.9 optics and the V-1240 has F1.8 optics, which also makes the Mjolnir very light sensitive. The SNR values are given for the following incoming radiance:
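
To see why full well drives peak SNR, here is a back-of-the-envelope sketch: for a shot-noise-limited detector the peak SNR is roughly the square root of the full-well capacity, so the quoted peak SNR values imply an effective full well. This is a rough estimate that ignores binning, read noise, and integration details.

```python
# Shot-noise-limited peak SNR ~ sqrt(full well), so the quoted peak SNR
# implies an effective full-well capacity (rough, shot-noise-only estimate).
for name, peak_snr in [("V-1240", 180), ("S-620", 900)]:
    full_well_e = peak_snr ** 2
    print(f"{name}: implied full well ~ {full_well_e} e-")
```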

The SNR values for the S-620 without any binning (620 x 300) are shown below.

For the V-1240 you see the SNR curve below.

The SNR curve for the VNIR is at full resolution, 1240 pixels x 200 bands. In the VS cube, the VNIR data are binned 2 times across track and 2 times along track to exactly match the pixel size of the S-620; this doubles the SNR, which then peaks at 360.
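
The binning arithmetic can be sketched as follows: summing n shot-noise-limited pixels scales the signal by n and the noise by sqrt(n), so the SNR improves by sqrt(n).

```python
import math

# 2x across-track and 2x along-track binning sums n = 4 pixels per
# output pixel. Shot-noise-limited SNR then improves by sqrt(n) = 2.
n = 2 * 2
peak_snr_full_res = 180          # V-1240 peak SNR at full resolution
peak_snr_binned = peak_snr_full_res * math.sqrt(n)
print(peak_snr_binned)           # 360.0
```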

Demonstration and sample data

The camera above is one of our demo units (SN5005 and SN7003). Since the first public demonstration during the EARSeL conference in Brno (Mjolnir at EARSeL), this system has gone from one UAV campaign to another, around the world. If you want us to do a demo at your site, or if you are interested in any of the demo data we have acquired, please contact us. Here is an example of 3 flight areas from the VS-620. They are georeferenced in PARGE, developed by ReSe. The system is boresight calibrated. Below you can see some false-color RGB previews from some of our demos.

Preview from Canada using a Digital Surface Model (DSM) with 2 cm accuracy: we achieved better than 1 pixel matching in the overlapping regions between the parallel flight lines using direct georeferencing with no GCPs on the ground (nearest-neighbor resampling).

Another preview from Canada (nearest-neighbor resampling):

Preview from Denver using a 1 m DEM (nearest-neighbor resampling):


Nyquist: Sharp optics per pixel or many pixels per PSF

There are many good reasons for making optics that are sharp per pixel and per band. For any given detector, with a given detector pitch, you will always get more information out of your imaging spectrometer with sharper optics (fact 1). Therefore, our current line of hyperspectral cameras is really sharp both spectrally and spatially.

For any given optical system, you will always get more information out of the system the more pixels you have per point spread function (PSF) (fact 2). When we come close to 2 bands FWHM in the spectral direction, HySpex adds an "N" to the name, telling the user that this system has a PSF close to 2 bands. This is to distinguish between our extremely sharp cameras and our Nyquist cameras ("N"). The Nyquist theorem states that "data must be sampled at least twice per cycle in order to reproduce it accurately". Translated into PSFs and pixels: the PSF of any optical system should be sampled with at least two pixels in order to reproduce it accurately.
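
To illustrate what this means in practice, the sketch below samples a synthetic Gaussian absorption line (FWHM 10 nm) with 5 nm band spacing (two samples per FWHM, i.e. Nyquist) and with 15 nm spacing. The feature and all numbers are invented for illustration. At Nyquist sampling the measured line depth barely depends on where the band centers happen to fall; undersampled, it varies strongly.

```python
import numpy as np

# Gaussian absorption line: depth 0.5, FWHM 10 nm, centered at 450 nm.
def line(wl, center=450.0, fwhm=10.0, depth=0.5):
    sigma = fwhm / 2.3548
    return 1.0 - depth * np.exp(-((wl - center) ** 2) / (2 * sigma ** 2))

measured = {}
for step in (5.0, 15.0):
    depths = []
    # Shift the band grid to mimic different center-wavelength placements.
    for phase in np.linspace(0.0, step, 21):
        bands = np.arange(400.0, 500.0, step) + phase
        depths.append(1.0 - line(bands).min())
    measured[step] = (min(depths), max(depths))
    print(f"{step:4.1f} nm sampling: measured depth {min(depths):.2f}-{max(depths):.2f}")
```

With 5 nm sampling the worst-case depth stays above 0.4 of the true 0.5; with 15 nm sampling the line can almost vanish, depending only on band placement.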

There are many trade-offs when deciding what system to make. HySpex currently offers one system with the "N" added to the name: the VNIR-3000N.


Why we offer both Nyquist and non-Nyquist cameras

For UAV applications we have made small imaging spectrometers for both the VNIR and SWIR regions. Here you select a detector with a small detector pitch, since space and weight are limited. So the detector is chosen for the application, and we make the optics as sharp per pixel and per band as possible. Thus, our UAV systems are very sharp, and they are not Nyquist cameras.

Our classical cameras (VNIR-1800, VNIR-1024, SWIR-384) are generic and used for all kinds of applications and platforms. For these we have designed optics around specific high-end detectors, resulting in extremely sharp optics per pixel and per band with very low distortions. There are many applications where frame rate is very important; a sharp system gives you the most information per data rate, so in those cases sharp systems are preferred.

For our VNIR-3000N camera we have taken the imaging spectrograph of the VNIR-1800 camera, which was designed for 6.5 µm pixels, and replaced the detector with a state-of-the-art detector with 3.45 µm pixels. In this hyperspectral camera we have better than 1.6 pixels spatial PSF for 3000 pixels, and better than 1.8 bands spectral PSF for 300 bands. You can get data out in 600 bands as well. The maximum frame rate of this system is 117 fps at full resolution.

Fact 1 tells you that you should have as sharp a system as possible. But if you are building many different systems, and you want models and algorithms that can be used on any of them regardless of where the center wavelengths are positioned, it is good to have a system that is a Nyquist camera spectrally, so that the actual spectrum can be reproduced and the models and algorithms are repeatable. For industrial cameras this is therefore a plus. But industrial cameras usually also need to be very fast, so the spectral PSF should not be much more than 2 bands, since anything more would compromise the amount of information you get per data rate.

As described in this post, there are many trade-offs when deciding what kind of system to design. In general we design systems to be really sharp per pixel and per band, and in some cases we design Nyquist cameras for specific applications or platforms.

Whatever system you use for your application, a good measure of the number of effective pixels of any camera is the number of spatial pixels on the detector divided by the FWHM of the PSF in pixels. The same goes for the number of effective bands. It is always good to check this parameter for any camera you consider; it also makes it easier to compare cameras that have similar overall specifications on the datasheet. Here is a link to an article comparing two cameras with similar specifications but very different sharpness.
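
As a worked example of this metric, using the VNIR-3000N figures quoted above (3000 spatial pixels, spatial PSF FWHM better than 1.6 px):

```python
# Effective pixel count = spatial pixels on the detector / PSF FWHM (px).
pixels, fwhm_px = 3000, 1.6
effective_pixels = pixels / fwhm_px
print(effective_pixels)   # 1875.0
```

A camera with 3000 nominal pixels but a 3-pixel-wide PSF would, by the same measure, only deliver 1000 effective pixels.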