Course on Computers in Microscopy
Royal Microscopical Society
University of Cambridge

14 - 17 September 1998

Course Organiser: Dr D M Holburn, University Engineering Department, Trumpington Street, Cambridge.

Lecture Synopses

The following pages summarise the content of the lectures planned for this course. Please note that the detailed schedule for the course is still under discussion, and the organisers reserve the right to add to or otherwise change the content of the lectures and other activities organised for the course. Full lecture notes will be provided for all participants on registration. Details of the practical sessions, which take place in the afternoons, and of the evening programme (informal presentations and lectures) will be provided on the first day of the course.

We look forward to welcoming you to Computers in Microscopy!

Fundamentals of Image Processing I

This lecture will attempt to justify and explain the established use of computers in the acquisition and processing of image data. It introduces the fundamental concepts of image manipulation by computer, on which the majority of subsequent lectures are based. It begins with a discussion of the characteristics of natural images, which are continuous, finite spatial functions, and goes on to describe the key digitisation processes - sampling and quantisation - by which natural images may be converted to digital images, which consist of arrays of pixels. The importance of pixels, the values they assume (greyscale), their organisation in space (tessellation) and the ways in which they are connected (connectivity) are outlined. The dangers implicit in this conversion process, and the constraints upon the computing hardware - for example, memory requirements - are discussed. Several examples of digitisers are referred to, including densitometers, television tube-based systems, and scanning instruments (like the SEM). There will be opportunities to see several of these systems in operation during the practical sessions.
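As an illustrative sketch (in Python, and not part of the course materials), the quantisation step and the resulting memory requirement can be expressed in a few lines:

```python
# Quantise a continuous intensity (0.0-1.0) to one of 2**bits grey levels,
# and estimate the memory needed to store a digitised image.

def quantise(intensity, bits=8):
    """Map a continuous intensity in [0.0, 1.0] to an integer grey level."""
    levels = 2 ** bits
    # Clamp to the valid range, then scale to 0 .. levels-1 with rounding.
    return int(min(max(intensity, 0.0), 1.0) * (levels - 1) + 0.5)

def memory_bytes(width, height, bits=8):
    """Memory (in bytes) required for a width x height image at 'bits' per pixel."""
    return width * height * bits // 8

print(quantise(0.5))            # mid-grey maps to level 128
print(memory_bytes(512, 512))   # a 512x512 8-bit image needs 262144 bytes
```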

Simple tools and techniques applicable to digital greyscale images are then covered. These include the grey-level histogram, which can be of immense value in correcting digitiser or other instrumental settings. An adaptation of the technique (grey-level histogram equalisation) allows images of low or uneven contrast to have their contrast enhanced automatically. An example is presented showing how contrast enhancement is achieved. A more general technique involves the transformation of an input image to an output image by systematic modification of pixels, in a so-called point operation. Other operations based on simple arithmetic are possible, and serve many valuable purposes in image processing. A number of examples will be covered. An important class of operations based on 'four-function' arithmetic is known as temporal filtering, which can be used to reduce the noise content of images obtained from 'noisy' sources such as TV cameras or the SEM.
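A minimal sketch of grey-level histogram equalisation (an illustration only, with the image held as a plain list of pixel rows and no imaging library assumed) might read:

```python
# Grey-level histogram equalisation for an 8-bit image: build the histogram,
# form its cumulative distribution, and remap each pixel through it so that
# the available grey levels are used more evenly.

def equalise(image, levels=256):
    pixels = [p for row in image for p in row]
    n = len(pixels)
    # 1. Build the grey-level histogram.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # 2. Form the cumulative distribution function (CDF).
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    # 3. Map each pixel through the normalised CDF to spread the histogram.
    return [[(cdf[p] * (levels - 1)) // n for p in row] for row in image]

# A low-contrast image whose values huddle between 100 and 103:
img = [[100, 101], [102, 103]]
print(equalise(img))   # values are stretched across the full 0-255 range
```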

Neighbourhood or local operations, in which many pixels may determine the value of a single output pixel, are described. These are often referred to as spatial filters, or spatial convolution. Several examples including low-pass filters, high-pass filters and edge detectors will be discussed and illustrated. All have in common the concept of a kernel - a small array of numbers - which defines the behaviour of the filter. These operations find applications in all branches of image processing and analysis. A later lecture will show how it is often possible to carry out these kinds of operation in the frequency domain using Fourier transforms.
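The kernel idea can be sketched directly (an illustrative fragment, not from the course notes; border pixels are simply left untouched here):

```python
# A minimal spatial (neighbourhood) filter: convolve a greyscale image with a
# 3x3 kernel. Each output pixel is a weighted sum of its 3x3 neighbourhood.

def convolve3x3(image, kernel):
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]           # copy; borders keep original values
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * image[y + ky - 1][x + kx - 1]
            out[y][x] = acc
    return out

# A 3x3 mean (low-pass) kernel would use weights of 1/9; here we use a simple
# Laplacian-style high-pass kernel, which responds strongly at edges.
highpass = [[0, -1, 0], [-1, 4, -1], [0, -1, 0]]
flat = [[10] * 4 for _ in range(4)]
print(convolve3x3(flat, highpass))  # a uniform region gives zero response inside
```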

All the techniques discussed in this lecture can be explored 'hands-on' using the PC-based EPICplus package provided in the laboratory sessions.

Video Cameras

The 1980s saw a rapid expansion in the use of video technology in all areas of science and technology. A key factor was the development of the solid-state charge-coupled device sensor (CCD) which has led to the availability of high performance, compact video cameras, ideally suited for interfacing to optical and other forms of microscope.

A brief introduction to the principles of operation of the CCD is given, and the device and its characteristics are compared with the older tube-based cameras. The CCD is not without its own limitations, however, and an appreciation of these can help avoid problems when specifying a system using CCD cameras.

The issues of spectral response, blooming, low-light performance and dark current are considered. Some indications of trends in the development of new, enhanced CCD cameras are discussed.

An exhibit of CCD cameras and related devices will be on display during the laboratory sessions.

Fundamentals of Image Processing II

This lecture commences with a brief overview of the techniques available for converting digital images into hard-copy on paper or film. Notwithstanding the 'Paperless Office', these are assuming progressively greater importance as the use of computers in information handling and publishing becomes more widespread. Three major approaches are considered.

A number of these techniques will be on display during the practical sessions.

The remainder of the lecture provides an introduction to Fourier techniques, which find widespread use in image processing, enhancement and reconstruction in all branches of microscopy. The Fast Fourier Transform is introduced as an efficient computational technique for converting digitised images into the Fourier or frequency domain. There are strong analogies with physical optics, since the result of this conversion is closely related to the optical diffractogram. The FFT is reversible, in that frequency domain information may be converted back into corresponding spatial domain images, so one way of regarding this approach is as an alternative representation for image data.

An important application of the FFT is in computing the convolution of a mask with an image. Where the mask is large, this approach is much more efficient than direct evaluation in the spatial domain. The Fourier domain image contains a great deal of information, and can usefully be displayed graphically. It has a characteristic form, in which information corresponding to different frequencies in the original image is seen at different positions. The FFT is thus an important diagnostic tool in its own right. A number of familiar examples are presented. This leads to an interactive technique for specifying and applying high-pass or low-pass filters. In fact, an alternative view of Fourier techniques simply treats them as an alternative route to the spatial filtering discussed earlier; however, this explanation is really too simplistic, as it ignores many important consequences that arise from this approach. A simple example shows the use of the FFT to perform high- and low-pass filtering.
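Frequency-domain filtering can be illustrated in one dimension (a sketch only, using a small radix-2 FFT on a grey-level profile; the 2D case applies the same idea along rows and columns):

```python
# Low-pass filtering in the Fourier domain: transform, zero the unwanted
# high-frequency components, then transform back.
import cmath

def fft(a, inverse=False):
    n = len(a)                      # n must be a power of two
    if n == 1:
        return a[:]
    sign = 1 if inverse else -1
    even = fft(a[0::2], inverse)
    odd = fft(a[1::2], inverse)
    out = [0] * n
    for k in range(n // 2):
        t = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

def lowpass(signal, keep):
    """Zero all but the 'keep' lowest-frequency components, then invert."""
    spec = fft([complex(s) for s in signal])
    n = len(spec)
    for k in range(n):
        freq = min(k, n - k)        # frequencies are symmetric about n/2
        if freq > keep:
            spec[k] = 0
    return [x.real / n for x in fft(spec, inverse=True)]

noisy = [10, 12, 10, 12, 10, 12, 10, 12]   # constant level plus alternating noise
print(lowpass(noisy, 0))  # keeping only the DC term recovers the mean, 11.0
```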

Further applications of Fourier image processing include modelling or correcting for optical or instrumental image degradations (image restoration) using - among others - the Inverse filter, the Wiener filter, and the Power spectrum matching filter. It can also be used in the reconstruction of 2D objects from 1D sections (or 3D objects from 2D sections), and in cross-correlation of images, for purposes of alignment, registration, or identification of features. It is also an important element in certain techniques for image data compression.

Several of these techniques can be demonstrated in the practical sessions using the EPICplus and SEMPER image processing packages.

Colour imaging techniques

Much of the published work on image processing and analysis concerns monochrome images, which are effectively maps of intensity, or some other single response. However, natural (optical) images formed with white light contain much more useful information, in the form of colour, and these can be captured using suitable multispectral (or colour) sensors. The use of colour is well established in optical microscopy, and includes such techniques as polarised-light microscopy for geological studies, and fluorescence microscopy.

Driven partly by market forces associated with broadcast TV, colour video cameras are now a mature and affordable technology. New developments promise even better performance and convenience of use. Framestores are now available which can capture full-colour images in a number of alternative forms, and there is considerable interest in extending the techniques of image processing and analysis to colour images, and many new methods are now beginning to emerge.  This presentation will address the issue of colour management and demonstrate the use of colour digital image analysis.

The use of colour makes additional demands of the scientist. Every part of the imaging system imposes its effect on the captured image, so the optical system must be chosen with some care if poor performance is not to result, and great care is necessary in the control of illumination if colour information captured by a camera is to be meaningful. Accurate colour reproduction is an essential component in the effective use of digital imaging techniques in light microscopy.

The colour reproduction process begins with an understanding that colour is the result of three key elements: light, the illuminated object, and the observation method. Colours we think of as 'white' can vary significantly in their spectral distribution, e.g., skylight is actually a bluish white while tungsten bulbs produce a yellowish white. Colour gamut is the range of colours that an imaging system can produce, and a wide variation in gamut exists between methods. For example, digital colour printers and printing presses have different colour gamuts. They seldom capture all the colours in the original scene, but each can simulate the appearance very successfully if colour reproduction is understood and controlled. Colour management is the overall process that ensures that the colours you see in the original image are matched as accurately as possible in the reproduced image. Standardisation and calibration are essential in order to produce consistent and repeatable results.

Measurements from Images

This lecture covers the principles and methods of making measurements from two dimensional images.

The lecture includes the basic techniques of morphometry, moving on through the more complex stereology procedures. Stereology procedures derived for isotropic specimens will be described first, then the adaptations required for the problems associated with anisotropic specimens.

Examples of measurement procedures will be reviewed with respect to the Highly Optimised Microscope Environment (HOME), a new concept in microscopy designed by a European consortium. This system is unique in that the monitor is projected into the field of view, and therefore all measurements are made whilst in the microscope environment. However, the principles are applicable to many imaging systems (e.g. Improvision, Kontron, Roche) when the necessary facilities, such as grids, are incorporated.

Fundamentals of Image Analysis

This lecture offers an introduction to image analysis, and provides a sound foundation of basic techniques on which a number of the methods covered in later lectures rely. Image processing involves subjecting images to operations which produce new images, which may be enhanced forms of the original. Image analysis, on the other hand, involves the extraction from images of information which is in some other form - for example, measurements or the number of items of a particular size. Before it can be analysed, a greyscale image must normally undergo one or more operations to distinguish the important objects in it from the background. This procedure is referred to as segmentation, and produces a binary or bi-level image. A simple approach to segmentation is illustrated by thresholding, which continues to be the technique used in most systems. A number of approaches to thresholding are discussed, covering situations where the original image contains significant noise, or is unevenly illuminated. Another approach uses edge-detection to seek out the edges of objects. There are strong similarities between the edge detectors used and the high-pass filters used to enhance edges in greyscale images. Both approaches have their advantages and disadvantages, and these are discussed.
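Segmentation by thresholding can be sketched as follows (an illustration only; the mean grey level is used here as a simple stand-in for more robust automatic schemes such as Otsu's method):

```python
# Segmentation by thresholding: pixels brighter than the threshold become
# object (1), the rest become background (0), yielding a binary image.

def threshold(image):
    pixels = [p for row in image for p in row]
    t = sum(pixels) / len(pixels)          # mean grey level as the threshold
    return [[1 if p > t else 0 for p in row] for row in image]

# Bright objects (around 200) on a dark background (around 20):
img = [[20, 22, 200],
       [21, 198, 199],
       [19, 20, 21]]
print(threshold(img))   # the bright pixels are separated from the background
```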

When objects have been identified, a number of methods exist for representing them. Some widely used methods are described, and the ways in which typical measurements can be made - for example, area, diameter or perimeter - are illustrated. The importance of calibration is highlighted, since without this, the results may be meaningless. Unexpected difficulties can arise when accurate measurements are needed, and these are discussed.
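The importance of calibration can be made concrete with a small sketch (illustrative only; the calibration factor shown is hypothetical):

```python
# Measurements from a binary image: area by pixel counting, converted to real
# units via a calibration factor (microns per pixel). Without calibration the
# result is only a pixel count, which by itself may be meaningless.

def area(binary, microns_per_pixel):
    count = sum(p for row in binary for p in row)     # number of object pixels
    return count * microns_per_pixel ** 2             # area in square microns

obj = [[0, 1, 1, 0],
       [1, 1, 1, 1],
       [0, 1, 1, 0]]
print(area(obj, 0.5))   # 8 pixels at 0.5 um/pixel -> 2.0 square microns
```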

A number of techniques exist for manipulating the shapes (or morphology) of objects in binary images. These encompass manual schemes in which a simulated pen or brush may be moved across the image to extend or erase parts of objects, as well as fully automated techniques in which objects may be shrunk, bloated, converted into skeletal structures, and so on. A number of examples are shown of these techniques applied to binary images. A later lecture will show how the same general methods can also be applied to greyscale images.

The techniques discussed in this lecture can be explored 'hands-on' during the laboratory sessions using the PC-based EPICplus package, and can be demonstrated by several of the manufacturers exhibiting image analysis packages.

Basic Framestore Techniques

An important element of the hardware used for processing images by computer is the framestore. While it is possible to use computers not fitted with a framestore for image work, there may well be restrictions on the speed and range of operations that can be carried out. Most present-day computers and many workstations can be fitted with framestore accessory cards from a variety of manufacturers to enhance their suitability for this work. Many currently available software packages are suitable for operation with a selection of framestores. This makes it possible to purchase cost-effective solutions for a range of different applications.

This lecture explains why the use of a framestore is desirable, from the standpoint of storing, acquiring and displaying images. It outlines the structure of a typical framestore and discusses the purpose of each part. For serious imaging work, there is a requirement to accept data from a video camera at high speed, and to supply video data to a display. These needs cannot normally be satisfied by general purpose computer hardware, but are standard facilities with a framestore. The fidelity with which a digital image can represent the original, natural image is governed by the characteristics of the framestore, being affected both by the quantity of memory available, and by the nature of the capture and display elements.

A framestore-based system can offer further useful facilities, including colour look-up tables or LUTs (occasionally known as a palette), which allow image intensity to be coded as arbitrary colours, and input look-up tables. More advanced kinds of framestore embody built-in hardware for high-speed digital processing. In some implementations, hardware can provide forms of temporal filtering, allowing suppression of noise while images are captured at full TV rates. The filter characteristics may be capable of adjustment, giving control over the rate at which the filtered image is built up, and the effectiveness of the process. In other cases, more general forms of programmed operations (such as background correction, histogram equalisation or spatial filtering) may be available.
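The look-up table mechanism amounts to a simple indexed mapping applied at display time, as this sketch shows (the particular false-colour ramp is purely illustrative):

```python
# An output look-up table (LUT): each of the 256 possible grey levels is
# mapped, at display time, to an arbitrary colour. This ramp maps dark
# pixels towards blue and bright pixels towards red.

def make_lut():
    # One (R, G, B) entry per grey level.
    return [(g, 0, 255 - g) for g in range(256)]

def apply_lut(image, lut):
    return [[lut[p] for p in row] for row in image]

lut = make_lut()
img = [[0, 128, 255]]
print(apply_lut(img, lut))   # [[(0, 0, 255), (128, 0, 127), (255, 0, 0)]]
```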

For certain applications, it is necessary to capture images in full colour. A suitably equipped framestore can achieve this, through provision of additional memory and extra inputs to receive the three electronic signals normally required to represent colour images.

A selection of framestores of different specifications will be in use during the practical sessions. A number of manufacturers in attendance will provide information on a range of different types.

Techniques and hardware for image analysis

This lecture expands on the introductory material presented earlier, concentrating on morphological techniques and the ways in which these can be applied to binary images and, by a logical extension, also to greyscale images.

Like spatial filters, morphological operations make use of a kernel (usually called a structuring element, or SE) to define the nature of the operation. However, the way in which the kernel or structuring element is applied is somewhat different. The basic morphological techniques are erosion and dilation. A number of examples of their effect are given, and a comparison with similar spatial filters is presented. The effect of varying the size and shape of the SE is discussed, and ways of combining sequences of erode/dilate operations to give open and close operations are described.
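Erosion and dilation with a 3x3 square structuring element can be sketched as follows (illustrative only; for simplicity, pixels outside the image are ignored rather than padded):

```python
# Binary erosion and dilation with a 3x3 square structuring element (SE):
# erosion keeps a pixel only if the SE fits entirely inside the object;
# dilation sets a pixel if the SE touches the object anywhere.

def _morph(image, keep_if_all):
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            neighbourhood = [image[y + dy][x + dx]
                             for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                             if 0 <= y + dy < h and 0 <= x + dx < w]
            out[y][x] = int(all(neighbourhood) if keep_if_all
                            else any(neighbourhood))
    return out

def erode(image):
    return _morph(image, keep_if_all=True)

def dilate(image):
    return _morph(image, keep_if_all=False)

square = [[0, 0, 0, 0, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 1, 1, 1, 0],
          [0, 0, 0, 0, 0]]
print(erode(square))   # erosion shrinks the 3x3 square to its single centre pixel
```

An opening is simply an erosion followed by a dilation, and a closing the reverse pairing.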

The extension of these operations to greyscale images involves considering these images as a set of one-dimensional grey-level profiles. Using this concept, analogies are presented for erosion, dilation, opening and closing operations in the context of greyscale images. A number of case studies are presented using greyscale morphological operations. These include: removal of noise by use of opening/closing operations; detection of micro-aneurysms in studies of the human circulatory system; and segmentation of aluminium grains using erosion, dilation and skeletonisation.

Morphological operations may also be extended to include further advanced types. A selection of these will be described, with examples, including: the top-hat transform, morphological gradient, flooding, and the watershed operation, which forms the basis of an improved technique for edge detection and segmentation. These are illustrated using an example which involves segmentation of diseased and healthy cells in breast cancer diagnosis.

Image processing: essential tools

This lecture introduces a set of essential tools for image processing and analysis, many of which are based on the concepts introduced in earlier lectures.

For accurate work on images held on photographic film, densitometry is an important technique, and its effectiveness governs the value of later processing/analysis. A technique for bilinear interpolation is described. This can help preserve a smooth appearance in magnification, rotation and many related operations.
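Bilinear interpolation itself is compact enough to sketch (an illustration only; positions must lie within the pixel grid so that all four neighbours exist):

```python
# Bilinear interpolation: the grey level at a non-integer position is a
# weighted average of the four surrounding pixels, which helps keep
# magnified or rotated images smooth.

def bilinear(image, y, x):
    y0, x0 = int(y), int(x)
    dy, dx = y - y0, x - x0
    p00 = image[y0][x0]          # the four neighbouring pixels
    p01 = image[y0][x0 + 1]
    p10 = image[y0 + 1][x0]
    p11 = image[y0 + 1][x0 + 1]
    # Weight each neighbour by its proximity to the sample point.
    return (p00 * (1 - dy) * (1 - dx) + p01 * (1 - dy) * dx
            + p10 * dy * (1 - dx) + p11 * dy * dx)

img = [[0, 100],
       [100, 200]]
print(bilinear(img, 0.5, 0.5))   # the exact centre averages all four: 100.0
```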

Many of the remaining tools depend on the use of Fourier domain processing, so the Fourier transform is briefly reviewed, including a discussion of the relationship with the diffractogram, and the appearance of the transform image under various conditions. The use of the transform for evaluating the cross-correlation product between two images is described. This is an important step in recognising characteristic image features, in alignment, and in registration. A number of examples are given. A more detailed description of Fourier domain filtering is given, with an illustration of the effects of altering the Fourier components by means of a spatial frequency response. Applications include noise suppression and edge enhancement.
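The use of cross-correlation for alignment can be illustrated in one dimension (a sketch, computed directly here; the FFT merely makes the same computation efficient for large images):

```python
# Cross-correlation of two 1D grey-level profiles to find the shift that
# best aligns them - the principle behind correlation-based registration.

def best_shift(a, b, max_shift):
    """Return the shift of b (in samples) giving the largest correlation with a."""
    best, best_score = 0, None
    for s in range(-max_shift, max_shift + 1):
        score = sum(a[i] * b[i - s]
                    for i in range(len(a)) if 0 <= i - s < len(b))
        if best_score is None or score > best_score:
            best, best_score = s, score
    return best

profile = [0, 0, 5, 9, 5, 0, 0, 0]
shifted = [0, 0, 0, 0, 5, 9, 5, 0]   # the same feature, moved 2 samples right
print(best_shift(shifted, profile, 3))  # → 2
```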

Removal of gradual background variations is a common requirement in image processing. It can be done using spatial filters with 'wide' masks. These can be applied as 1D filters in the spatial domain using various 'tricks' to maintain efficiency. Alternatively, infinite impulse response (IIR) filters may be used. These are a spatial analogue of the temporal filters used in some framestores to reduce noise.
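A first-order recursive (IIR) filter run along a scan line gives the flavour of this approach (an illustrative sketch; the smoothing constant is arbitrary):

```python
# Background estimation with a first-order recursive (IIR) filter: the filter
# tracks slow background variations, which can then be subtracted from the
# original profile, leaving the sharp features.

def iir_background(profile, alpha=0.1):
    """Exponential smoothing: each output mixes the new sample into a running estimate."""
    bg, out = profile[0], []
    for p in profile:
        bg = (1 - alpha) * bg + alpha * p
        out.append(bg)
    return out

line = [10, 10, 10, 60, 10, 10, 10, 10]    # flat background with one sharp feature
bg = iir_background(line, alpha=0.2)
flattened = [p - b for p, b in zip(line, bg)]
print([round(f, 1) for f in flattened])    # the slow background is removed
```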

Projection and back-projection are important techniques, encountered in the creation of 1D data from 2D data (projection) and 2D data from 1D data (back-projection). These are key elements in the techniques of image reconstruction from sections.

Many of the techniques and principles covered in this lecture can be demonstrated 'hands-on' during the afternoon practical sessions.

Image Processing and Reconstruction

This lecture introduces a number of slightly more specialised tools which, though they have special relevance to transmission electron microscopy, are firmly based on the techniques already discussed and can be adapted for use with other forms of microscopy. It also discusses a class of methods by which 3D information may be assembled from microscopical images. The methods described here for 3D image reconstruction are similar in approach to the techniques of computed tomography, which are familiar to us all through their applications in medicine, viz. body- and brain-scanners.

Fourier filtering may be used for improving the appearance of images in a subjective way, by boosting or attenuating high and low spatial frequencies. A special case exists in TEM when imaging particles with constant atom-atom distance, where a different technique is more appropriate. Resolution can be estimated if two copies of the same image are available, obtained separately and with different noise content, by comparing their Fourier transforms.

The contrast transfer function, encountered in high resolution TEM and other forms of microscopy, is introduced. For TEM this can be expressed in terms of the spherical aberration coefficient, the wavelength associated with the incident electrons, the defocus, the astigmatism magnitude and azimuth, divergence, and spread terms. Diffraction patterns are useful for identifying defocus, astigmatism and beam misalignment under suitable conditions. They may also be used to assess the degree of crystallinity, and to characterise image texture.

Methods of correcting images for degradations such as blurring and lens aberrations in an objective way (image restoration) call for some knowledge of the nature of the degradation, but are applicable to all kinds of microscopy. In many cases noise is also present, and imposes a limit on the quality of restoration that can be achieved. The Inverse filter is the best known means of accomplishing this, using a transfer function to describe the degradation, but fails in some circumstances. Other filters have been devised to circumvent these shortcomings, and include various classes of Wiener filter and other 'constrained' filters. For the mathematically inclined, a derivation of the Wiener filter is provided (it is not necessary to understand the derivation in order to use the filter!).
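For reference, the Wiener filter commonly takes the following form in the literature (the notation here is generic, and should be checked against the symbols used in the lecture notes themselves):

```latex
% Wiener restoration filter in the Fourier domain:
%   H(u,v)   - transfer function of the degradation
%   S_n, S_f - power spectra of the noise and of the undegraded object
W(u,v) = \frac{H^{*}(u,v)}{|H(u,v)|^{2} + S_n(u,v)/S_f(u,v)}
```

When the noise term vanishes, this reduces to the Inverse filter, which explains why the Wiener filter degrades gracefully where the Inverse filter fails.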

These techniques can also be used in TEM, but adaptations are necessary, since complete information about the transfer function often cannot be obtained from a single image. Recording a set of images at different focal settings (the through-focal series) makes it possible to piece together the required information. Other problems occur if the specimen is thick or consists of heavy atoms; the relationship between the transmitted electron wave and the specimen structure is far more complex, making restoration correspondingly more difficult.

Some techniques for improving signal-to-noise ratio can be used where only a single image is available. A much-used example involves averaging several aligned copies of a given image or part-image. This is of great value when imaging known periodic structures. However, good registration must be achieved if fine detail is not to be lost. A number of alternative approaches are examined, including techniques that can succeed even in the face of crystal lattice distortion and considerable noise. Some of these techniques can also be applied to images that contain isolated, repeating structures.

An attractive means of visualising 3D objects is by means of a computer-generated picture. The production of such views may be comparatively demanding in computational terms, since they require the evaluation of a suitable model for surface illumination at every point on the surface. In addition, 'hidden features' must be suppressed from the view. Modern workstations are optimised to perform the necessary computations efficiently; most have the required graphical display capability, and some have hardware-assist features to allow rotation and shading in real time.

3D views may be reconstructed from series of sections (resembling a set of contours) or from projections obtained over as wide a range of angles as possible. A number of procedures are available. Among these, Fourier reconstruction is the only viable approach for objects such as crystals or macromolecules with strong periodicity.

Other approaches to be covered involve the use of shadows formed by a thin layer of a suitable heavy metal evaporated onto the surface at an angle. The resulting signal can be processed to provide height information, but accurate calibration may present difficulties.

Computer-assisted photogrammetry

There is considerable interest in techniques for topographic measurement in microscopy, and a number of specialised instruments - for example, the scanning tunnelling microscope (STM) and its derivatives - have emerged to serve specific needs. One technique by means of which topographic information can be obtained from most imaging instruments is stereometry. In stereometry, which owes much of its development to the science of cartography, two different views are obtained of the same specimen, usually by tilting it. Various means have been developed to allow a viewer to fuse the two images so as to visualise the 3D nature of the original object. These include prismatic and mirror viewers, colour filters, polarising filters, and electronically operated shutters. To obtain quantitative information, independent measurements are made on features of interest in each view, giving parallax values for each feature. Suitable processing of the parallax data then yields 3D coordinates for the features. With the advent of computers, some progress has been made towards automating this process, but the need for a manual tilt operation hinders its application.
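The parallax-to-height step can be sketched numerically. One commonly quoted relation for a symmetric tilt assumes the form below; treat the exact formula as an assumption here, since it depends on the imaging geometry actually used:

```python
# Converting a measured parallax into a height difference, assuming a
# symmetric tilt of total angle 'tilt_degrees' at magnification M:
#   h = p / (2 * M * sin(tilt / 2))
import math

def height_from_parallax(parallax_mm, tilt_degrees, magnification):
    half = math.radians(tilt_degrees / 2.0)
    return parallax_mm / (2.0 * magnification * math.sin(half))

# A 1 mm parallax measured at 1000x with a 10 degree total tilt:
h = height_from_parallax(1.0, 10.0, 1000.0)
print(round(h * 1000, 2), "microns")   # height difference in microns
```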

In SEM a number of other approaches have been tried, some based on local determination of specimen slope using a suitable detector. In others a range-finding approach coupled with automatic focusing is used to determine feature heights.

A new technique under development in Cambridge uses an enhanced form of stereometry with an electronic method for achieving specimen tilt. Instead of tilting the specimen, the beam is tilted electronically using a magnetic deflection system within the column. Although the deflections obtained are small, they are sufficient for generating stereometric parallax values. The positions of corresponding features in the two images are identified automatically, using a cross-correlation procedure which runs in a dedicated high-speed digital processor, based on the Fast Fourier Transform.

To further enhance the accuracy of the procedure, the specimen is scanned in a series of small zones. When the electronic tilt mechanism is operated, the specimen appears to tilt about an axis which (in the general case) is liable to be some distance from the zone being scanned. In consequence, the two images are formed under different conditions, making it difficult to identify corresponding features. To circumvent this problem, automatic adjustments are made to the focus setting in an iterative manner, to coerce the tilt axis to approach the zone being imaged. As successive zones are scanned, the focus setting tracks the specimen topography. The entire process can be made to operate without operator intervention, and can produce spot heights, line profiles or topographic maps over a region of the specimen. A number of examples will be shown to demonstrate the accuracy and repeatability of the technique.

There will be a number of opportunities to see demonstrations involving manual stereometry, anaglyph and other forms of display, and automated stereometry on the SEM during the afternoon practical sessions.

Computers in confocal scanning microscopy

What is a confocal microscope? The confocal principle. 2D scanning. 3D scanning; 4D scanning and beyond ... How are computers used in confocal microscopy?

Microscope scan control; Z-axis focus control; optical path adjustments; experimental management; auto focus; acquiring a digital confocal image; host computer control of the framestore.

Image processing of digital confocal images

Sampling 3D space; geometric image distortions; photometric image distortions; quantitation and data reduction; spatial projection; temporal projection.

Confocal optical sections; 2D presentation; Z-position or height; 3D images; stereo images; projected stereo pairs; 3D from motion parallax; height-coded 3D projections; 4D visualisation; and beyond ...

Non-quantitative reconstructions

Conclusions

It is expected that there will be opportunities to see demonstrations of CSM image processing during the practical sessions.

Image compression and transmission

Compression techniques

Lossless methods:

Run length coding

Huffman coding

Lempel-Ziv-Welch coding

Compression achieved
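As an illustration of the simplest lossless method above (not part of the course materials), run-length coding stores each run of identical pixel values as a (value, count) pair:

```python
# A minimal run-length coder. It is lossless, and performs well on images
# with large uniform regions - binary images especially.

def rle_encode(pixels):
    runs = []
    for p in pixels:
        if runs and runs[-1][0] == p:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([p, 1])       # start a new run
    return runs

def rle_decode(runs):
    return [value for value, count in runs for _ in range(count)]

row = [0, 0, 0, 0, 255, 255, 0, 0]
encoded = rle_encode(row)
print(encoded)                         # [[0, 4], [255, 2], [0, 2]]
assert rle_decode(encoded) == row      # lossless: decoding restores the original
```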

Lossy techniques:

Transform coding

JPEG (Joint Photographic Experts Group)

Fractal compression

Image file formats

Bitmap formats

PCX, GEM, GIF, TIFF and other formats

Conversion programs

The need for networks

What is a network?

Network applications

Remote access, resource sharing, electronic mail, graphics, file transfer

Data communications

Error control

Ethernet

TCP/IP

Conclusion

This lecture will include demonstrations performed on-line using the University Engineering Department's extensive networked computer system.


This page was prepared by David Holburn, and comes to you courtesy of Cambridge University Engineering Department. Last updated on 3rd September 1998. 
