What if you could hold a physical model of your own brain in your hands, accurate down to its every unique fold? That’s just a normal part of life for Steven Keating, who had a tennis ball-sized tumour removed from his brain at age 26 while he was a graduate student in the MIT Media Lab’s Mediated Matter group.

Curious to see what his brain actually looked like before the tumour was removed, and with the goal of better understanding his diagnosis and treatment options, Keating collected his medical data and began 3D printing his MRI and CT scans. However, he was frustrated that existing methods were prohibitively time-intensive and cumbersome, and failed to accurately reveal important features of interest. Keating reached out to some of his group’s collaborators, including members of the Wyss Institute at Harvard University, who were exploring a new method for 3D printing biological samples.

“It never occurred to us to use this approach for human anatomy until Steve came to us and said, ‘Guys, here’s my data, what can we do?’,” says Ahmed Hosny, who was a research fellow at the Wyss Institute at the time, and is now a machine learning engineer at the Dana-Farber Cancer Institute.

The result of that impromptu collaboration – which grew to involve James Weaver, senior research scientist at the Wyss Institute; Neri Oxman, director of the MIT Media Lab’s Mediated Matter group and associate professor of Media Arts and Sciences; and a team of researchers and physicians at several other academic and medical centres in the US and Germany – is a new technique that allows images from MRI, CT, and other medical scans to be easily and quickly converted into physical models with unprecedented detail.

“I nearly jumped out of my chair when I saw what this technology is able to do,” says Beth Ripley, assistant professor of radiology at the University of Washington, a clinical radiologist at the Seattle VA, and co-author of the resulting paper. “It creates exquisitely detailed 3D-printed medical models with a fraction of the manual labour currently required, making 3D printing more accessible to the medical field as a tool for research and diagnosis.”

Our approach not only allows for high levels of detail to be preserved and printed into medical models, but it saves a tremendous amount of time and money.
– James Weaver, the Wyss Institute

The little details

Imaging technologies like CT scans produce high-resolution images as a series of ‘slices’ that reveal the details of structures inside the human body, making them an invaluable resource for evaluating and diagnosing medical conditions. Most 3D printers build physical models in a layer-by-layer process, so feeding them layers of medical images to create a solid structure is an obvious synergy between the two technologies.

However, there is a problem: MRI and CT scans produce images with so much detail that the object or objects of interest need to be isolated from the surrounding tissue and converted into surface meshes in order to be printed. This is typically achieved in one of two ways: ‘segmentation’, a very time-intensive process in which a radiologist manually traces the desired object on every single image slice (sometimes hundreds of images for a single sample); or automatic ‘thresholding’, in which a computer program quickly converts each grayscale pixel into either solid black or solid white, depending on whether it falls above or below a chosen cut-off shade of grey. However, medical imaging data sets often contain objects that are irregularly shaped and lack clear, well-defined borders; as a result, auto-thresholding (or even manual segmentation) often overstates or understates the size of a feature of interest and washes out critical detail.
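To make the distinction concrete, the sketch below shows what automatic thresholding amounts to in code. It is purely illustrative rather than the authors’ software, and it assumes a scan slice has already been loaded as a 2D NumPy array of grayscale values from 0 to 255; note how every intermediate shade of grey, and the anatomical detail it encodes, is collapsed to pure black or white.

```python
# Illustrative sketch of automatic thresholding (not the authors' tooling).
# Assumes the scan slice is a 2D NumPy array of grayscale intensities (0-255).
import numpy as np

def threshold_slice(slice_gray: np.ndarray, cutoff: int = 128) -> np.ndarray:
    """Convert a grayscale slice to pure black/white.

    Every pixel at or above `cutoff` becomes white (255), everything else
    black (0). All intermediate shades -- and the detail they encode --
    are discarded, which is why thresholding struggles with irregular,
    poorly delineated structures.
    """
    return np.where(slice_gray >= cutoff, 255, 0).astype(np.uint8)

# Example: a synthetic slice with soft, noisy intensities
slice_gray = np.clip(np.random.normal(120, 40, (256, 256)), 0, 255).astype(np.uint8)
mask = threshold_slice(slice_gray, cutoff=128)
```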

The new method described by the paper’s authors provides medical professionals with a better solution, offering a fast and highly accurate way to convert complex images into a format that can be easily 3D printed. The key lies in printing with dithered bitmaps, a digital file format in which each pixel of a grayscale image is converted into a series of black and white pixels, and the density of the black pixels is what defines the different shades of grey rather than the pixels themselves varying in colour. Similar to the way images in black-and-white newsprint use varying sizes of black ink dots to convey shading, the more black pixels that are present in a given area, the darker it appears. By simplifying all pixels from various shades of grey into a mixture of black or white pixels, dithered bitmaps allow a 3D printer to print complex medical images using two different materials that preserve all the subtle variations of the original data with much greater accuracy and speed.
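The article does not specify which dithering algorithm the team used, so the sketch below uses classic Floyd-Steinberg error diffusion purely to illustrate the idea: each grayscale pixel is snapped to black or white, and the rounding error is pushed onto its neighbours so that the local density of black pixels still tracks the original shade of grey.

```python
# Illustrative error-diffusion (Floyd-Steinberg) dithering of one slice.
# The specific dithering scheme is an assumption, not a detail from the paper.
import numpy as np

def dither_slice(slice_gray: np.ndarray) -> np.ndarray:
    """Return a black/white bitmap whose local density of white pixels
    approximates the local grayscale intensity of the input (0-255)."""
    img = slice_gray.astype(np.float64)
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 255.0 if old >= 128 else 0.0
            out[y, x] = int(new)
            err = old - new
            # Diffuse the quantisation error to neighbouring pixels so the
            # average brightness -- and hence the fine detail -- is preserved.
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    img[y + 1, x - 1] += err * 3 / 16
                img[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    img[y + 1, x + 1] += err * 1 / 16
    return out
```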

In the entrenched elements

The team of researchers used bitmap-based 3D printing to create models of Keating’s brain and tumour that faithfully preserved all of the gradations of detail present in the raw imaging data down to a resolution that is on par with what the human eye can distinguish from about 9-10in away. Using this same approach, they were able to print a variable-stiffness model of a human heart valve using different materials for the valve tissue versus the mineral plaques that had formed within the valve, resulting in a model that exhibited mechanical property gradients and provided new insights into the actual effects of the plaques on valve function.

“Our approach not only allows high levels of detail to be preserved and printed into medical models, but it saves a tremendous amount of time and money,” says Weaver, who is the corresponding author of the paper. “Manually segmenting a CT scan of a healthy human foot, with all its internal bone structure, bone marrow, tendons, muscles, soft tissue, and skin, for example, can take more than 30 hours, even by a trained professional – we were able to do it in less than an hour.”

High hopes

The researchers hope that their method will help make 3D printing a more viable tool for routine exams and diagnoses, patient education, and understanding the human body. “Right now, it’s just too expensive for hospitals to employ a team of specialists to go in and hand-segment image data sets for 3D printing, except in extremely high-risk or high-profile cases. We’re hoping to change that,” says Hosny.

In order for that to happen, some entrenched elements of the medical field need to change as well. Most patients’ data are compressed to save space on hospital servers, so it’s often difficult to get the raw MRI or CT scan files needed for high-resolution 3D printing. Additionally, the team’s research was facilitated through a joint collaboration with leading 3D printer manufacturer Stratasys, which allowed access to its 3D printer’s intrinsic bitmap printing capabilities. New software packages still need to be developed to better leverage these capabilities and make them more accessible to medical professionals.

Despite these hurdles, the researchers are confident that their achievements present a significant value to the medical community. “I imagine that, sometime within the next five years, the day could come when any patient that goes into a doctor’s office for a routine or non-routine CT or MRI scan will be able to get a 3D-printed model of their patient-specific data within a few days,” says Weaver.

Keating, who has become a passionate advocate of efforts to enable patients to access their own medical data, still 3D prints his scans to see how his skull is healing post-surgery and check on his brain to make sure his tumour isn’t coming back. “The ability to understand what’s happening inside of you, to actually hold it in your hands and see the effects of treatment, it is incredibly empowering,” he says.


Better planning, smoother diagnostics

Three-dimensional (3D) printing technologies are increasingly used to convert medical imaging studies into tangible (physical) models of individual patient anatomy, allowing physicians, scientists and patients an unprecedented level of interaction with medical data.

To date, virtually all 3D-printable medical data sets have been created using traditional image thresholding, subsequent isosurface extraction, and the generation of .stl surface mesh files.

These existing methods, however, are highly prone to segmentation artifacts that either overstate or understate the features of interest, thus resulting in anatomically inaccurate 3D prints. In addition, they often omit finer structural details and require time- and labour-intensive processes to visually verify their accuracy. To circumvent these problems, the authors of this paper present a bitmap-based multimaterial 3D printing workflow for the rapid and highly accurate generation of physical models directly from volumetric data stacks.

This workflow employs a thresholding-free approach that bypasses isosurface creation and traditional mesh slicing algorithms, hence significantly improving speed and accuracy of model creation.

In addition, using preprocessed binary bitmap slices as input to multimaterial 3D printers allows for the physical rendering of functional gradients native to volumetric data sets, such as stiffness and opacity, opening the door for the production of biomechanically accurate models.
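As a rough illustration of how such gradients could be encoded, the sketch below maps one grayscale slice onto complementary binary bitmaps for a hypothetical two-material (soft/rigid) printer. The material names, the linear intensity-to-material mapping, and the noise-based dithering are all assumptions made for the example, not details taken from the paper.

```python
# Hedged sketch: mapping one grayscale slice to per-material binary bitmaps
# for an assumed two-material (soft/rigid) printer. Illustrative only.
import numpy as np

def material_bitmaps(slice_gray: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split one slice into two complementary binary bitmaps.

    Brighter voxels (e.g. mineralised plaque) get a higher probability of
    being assigned the rigid material; darker voxels the soft one. The
    dithering here thresholds against a random field, a simple stand-in
    for a proper ordered or error-diffusion dither.
    """
    rigid_fraction = slice_gray.astype(np.float64) / 255.0  # 0 = all soft, 1 = all rigid
    noise = np.random.random(slice_gray.shape)
    rigid = (noise < rigid_fraction).astype(np.uint8)        # 1 = deposit rigid material
    soft = 1 - rigid                                         # complementary bitmap
    return rigid, soft
```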

Source: ‘From Improved Diagnostics to Presurgical Planning: High-Resolution Functionally Graded Multimaterial 3D Printing of Biomedical Tomographic Data Sets’