Technology for Aligning Multiple CT Images in One Second: Supporting Doctors in Detecting Changes in Diseases

Doctors must go through a tedious process to compare CT images

Computed tomography (CT) uses X-rays to take cross-sectional images of the body. A CT scan is an important examination for detecting diseases, such as cancer, from multiple cross-sectional images. CT scanning is used not only for cancer screening but also for monitoring changes in diseases over time. For example, doctors can observe changes in a nodule's size, shape, density, and other characteristics by comparing images taken a year ago with those taken most recently. In CT imaging, however, because a patient's heartbeat and breathing shift the position of the nodule in every scan, doctors must go through the tedious process of paging through multiple cross-sectional images to find a matching one and then manually aligning the nodule's position across those cross sections.

Generally, because the size of a nodule itself changes over time, its position cannot be aligned simply by detecting the nodule and matching it between a past image and the latest one. A conventional method instead automatically extracts "feature points", readily identifiable areas that seldom change over time, such as blood vessels and the edges of organs surrounding the nodule, and locates the nodule by matching these feature points between the two images and measuring how far they have shifted (a sketch of this approach follows below). When there are few feature points near the nodule, however, alignment accuracy declines. Expanding the area over which feature points are extracted creates a separate problem: distant feature points shift in different ways due to breathing and other factors, making it difficult to estimate the nodule's position.
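As a rough illustration only: the sketch below assumes feature points have already been detected and matched in 3-D between a past and a latest scan, and estimates the nodule's shift as the unweighted mean of the feature-point shifts. The function name, coordinates, and units are hypothetical; this is not Fujitsu's implementation.

import numpy as np

def estimate_shift_conventional(past_points: np.ndarray,
                                latest_points: np.ndarray) -> np.ndarray:
    """Estimate the nodule's displacement as the unweighted mean
    of the matched feature-point shifts (both arrays shaped (N, 3))."""
    shifts = latest_points - past_points   # per-point 3-D displacement
    return shifts.mean(axis=0)             # one global shift estimate

# Hypothetical example: three matched feature points in (x, y, z) millimetres.
past = np.array([[10.0, 20.0, 5.0], [40.0, 22.0, 6.0], [15.0, 60.0, 4.0]])
latest = past + np.array([1.5, -0.5, 2.0])  # uniform shift, for illustration
print(estimate_shift_conventional(past, latest))  # -> [ 1.5 -0.5  2. ]

Because every feature point counts equally here, points far from the nodule that move differently (with breathing, for example) pull the estimate away from the nodule's true shift, which is the weakness described above.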

Basic approach for automatic alignment

Calculating the nodule's position from feature-point shifts, weighting points near the nodule more heavily

Fujitsu Laboratories has developed a technology that automatically and accurately aligns nodules by referencing feature points over a larger area when there are few blood vessels or other feature points close to the nodule.

To deal with cases where there are few feature points near the nodule, this technology searches for feature points over a larger area than the conventional method. It then weights each feature point according to its proximity to the nodule when calculating the nodule's position. Feature points close to the nodule tend to shift in the same way as the nodule itself, because contiguous tissue moves together, so their shifts are weighted more heavily in calculating the deviation of the nodule's position. In addition, although the processing volume grows with the number of feature points compared, the image features (the pattern of pixels around each point) are simplified, which actually accelerates processing. In this way, positions can be matched across multiple cross sections both accurately and quickly; a sketch of such weighting follows below.
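A minimal sketch of one way this distance-based weighting could work, again assuming feature points are already matched in 3-D. The Gaussian weighting kernel and the sigma value are illustrative assumptions; Fujitsu has not published the exact weighting function.

import numpy as np

def estimate_shift_weighted(past_points: np.ndarray,
                            latest_points: np.ndarray,
                            nodule_pos: np.ndarray,
                            sigma: float = 30.0) -> np.ndarray:
    """Estimate the nodule's displacement as a distance-weighted mean of
    feature-point shifts, so points near the nodule dominate the estimate."""
    shifts = latest_points - past_points                      # (N, 3) displacements
    dists = np.linalg.norm(past_points - nodule_pos, axis=1)  # distance to nodule
    weights = np.exp(-(dists ** 2) / (2.0 * sigma ** 2))      # Gaussian falloff (assumed)
    weights /= weights.sum()                                  # normalise to sum to 1
    return weights @ shifts                                   # weighted average shift

# Hypothetical usage: predict where the nodule moved in the latest scan.
past = np.array([[10.0, 20.0, 5.0], [40.0, 22.0, 6.0], [150.0, 60.0, 4.0]])
latest = past + np.array([[1.5, -0.5, 2.0], [1.4, -0.4, 2.1], [5.0, 3.0, -1.0]])
nodule = np.array([12.0, 25.0, 5.0])
print(nodule + estimate_shift_weighted(past, latest, nodule))

With sigma chosen appropriately, the distant point at x = 150 mm, which moves differently (as breathing can cause), contributes almost nothing, so the estimate follows the two points near the nodule.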

Reducing the time needed for image diagnosis lowers doctors' workload

Alignment of nodule positions by the newly developed method

In tests on cases of pulmonary disease, the new technology aligned images to within the error margin of 2.5 mm needed for practical use in 83% of cases, compared with 33% for conventional methods. It can automatically align the nodule's position in approximately one second across the hundreds of cross-sectional images generated per CT scan.

This technology is expected to reduce the time doctors spend manually aligning vast numbers of images for diagnosis, which lowers their workload. It also shortens the time needed for examinations, which in turn will reduce patients' waiting times at hospitals.

Fujitsu Laboratories is conducting practical testing on a wide range of images, with the goal of commercializing the technology as a Fujitsu product during fiscal 2016.