Robotic Assisted Ultrasound-Guided Radiation Therapy

This work was published at the IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), 2013.

More recent work includes a canine study with the system described on this page, and a newer version based on the Universal Robots UR5 or UR3.

Summary

Image-guided radiation therapy (IGRT) involves two main procedures, performed in different rooms on different days: (1) treatment planning in the simulator room on the first day, and (2) radiotherapy in the linear accelerator room over multiple subsequent days. Both the simulator and the linear accelerator include CT imaging capabilities, which enable treatment planning and reproducible patient setup, but provide neither good soft tissue contrast nor the ability to monitor the target during treatment. Robotic systems for ultrasonography have also been developed for other applications [1], [2], [3].

[Figure: Proposed robotic US-guided radiotherapy setup]

In this project, our goal is to construct a robotically-controlled, integrated 3D x-ray and ultrasound imaging system to guide radiation treatment of soft-tissue targets. We are especially interested in registration between the ultrasound images and the CT images (from both the simulator and accelerator), because this enables the treatment plan and overall anatomy to be fused with the ultrasound image. We note, however, that ultrasound image acquisition requires relatively large contact forces between the probe and patient, which leads to tissue deformation. One approach is to apply a deformable (nonrigid) registration between the ultrasound and CT, but this is technically challenging. Our approach is to apply the same tissue deformation during CT image acquisition, thereby removing the need for a non-rigid registration method. We use a model (fake) ultrasound probe to avoid the CT image artifacts that would result from using a real probe. Thus, the requirement for our robotic system is to enable an expert ultrasonographer to place the probe during simulation, record the relevant information (e.g., position and force), and then allow a less experienced person to use the robot system to reproduce this placement (and tissue deformation) during the subsequent fractionated radiotherapy sessions. We do not attempt to do this autonomously, but rather employ a cooperative control strategy, in which the robot shares control of the ultrasound probe with a human operator. One important task for the robot is thus to help the less experienced user reproduce the original setup.

System Description

The proposed setup is illustrated in the figure above left, where the robotic unit is attached to the LINAC table via a bridge that can slide along the bed on rails attached to its sides. At its end-effector, the robot holds an ultrasound (US) probe that obtains real-time 3D US images of the patient's target anatomy while the radiation beams are delivered from the collimator of the gantry.

As mentioned above, the image-guided radiation therapy process consists of two major phases: (1) treatment planning in the simulator room, and (2) treatment delivery in the treatment (LINAC) room. We assume that the treatment room and simulator room each contain a camera system to simultaneously track the US probe, the robot base, and the couch. In our setup, the tracker (camera) and US probe are provided by the Clarity System (Elekta AB, Stockholm, Sweden). In both rooms, the cameras are calibrated to the room isocenter and therefore provide a consistent reference coordinate system, within the accuracy of the calibration.
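
To make the frame bookkeeping concrete, the following minimal sketch (Python/NumPy, with hypothetical transform names; this is not the Clarity interface) shows how a tracked probe pose is expressed in the isocenter frame by composing homogeneous transforms.

```python
# Minimal sketch: expressing a tracked US-probe pose in the room isocenter frame.
# T_iso_cam (camera-to-isocenter calibration) and T_cam_probe (live tracker
# measurement) are hypothetical names, not part of the actual Clarity API.
import numpy as np

def make_transform(R, t):
    """Build a 4x4 homogeneous transform from a 3x3 rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def probe_in_isocenter(T_iso_cam, T_cam_probe):
    """Compose the room calibration with the tracker measurement."""
    return T_iso_cam @ T_cam_probe

# Because each room is calibrated to its own isocenter, a probe pose recorded in
# the simulator room can be compared directly with one measured in the treatment
# room, up to the accuracy of the two calibrations.
```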

In the treatment planning phase, an ultrasonographer finds the anatomical target using the US probe attached at the tip of the robot and saves the 3D US image. Then, the US probe is replaced by an x-ray-compatible model probe which, with the assistance of the robot, is brought to exactly the same position relative to the target area as the real probe, to create the same local soft tissue deformation. After that, the couch is moved inside the CT scanner and the target area is scanned to acquire a CT image, which is then registered to the reference US image. At this point, all information is available to plan the radiation therapy. In the planning phase, the configuration of the robot should also be taken into account when determining the beam directions, to avoid irradiating the robot or US probe.

The treatment phase can be repeated on multiple days, depending on the details of the therapy plan. In the following, the treatment planning phase is referred to as "day-1" and the treatment phase as "day-2". In the treatment phase, again with robot assistance, the US probe or model probe is brought to the same position relative to the target organ as on day-1. As in the treatment planning phase, the model probe is used when a CBCT image is acquired, in order to reduce the metal artifact in the CT image. After the clinician confirms proper setup of the patient, based on the CBCT and 3D US images, the radiation beams are delivered to destroy the malignant anatomical structure. Because the robot continues to hold the US probe on the patient, the clinician can monitor the target to ensure that proper setup is maintained (e.g., that the target has not significantly shifted from its initial position).

Robotic System Design

[Figure: 5-dof prototype robotic system]

To carry out the robotic tasks described in the System Description section, a 5 degree-of-freedom (dof) prototype robotic system was designed (figure on the left). As can be seen, the robot consists of two motorized units connected through a passive arm whose configuration can be changed and locked manually. Two rotary stages provide the 2 rotational dof used to set the orientation of the US probe. Mechanically, this is not a remote center of motion (RCM) device, but RCM motion can be obtained in software by coordinating the motion of the linear and rotary stages. The figure below shows a closer view of the first rotary stage and the parallelogram mechanism, which are responsible for accurate orientation motion. In the design of the mechanism, extra care was taken to minimize the use of metals, to avoid introducing significant artifacts in the CT images. A force sensor is installed between the US probe and the robot to enable the system to measure the forces applied on the probe. The use of a single force sensor minimizes the cost of the system, but makes it difficult to distinguish between forces applied by the user's hand and by the patient's tissue.
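
As an illustration of the software RCM, here is a minimal sketch (the naming and point-based formulation are our own simplification, not the robot's actual kinematic model) of the translation the linear stages must command so that a commanded rotation leaves the remote center fixed.

```python
# Minimal sketch of a software RCM: rotating the probe by R about a remote
# center c (e.g., the probe tip) requires the linear stages to move the rotary
# stages' pivot p so that c remains fixed under the combined motion
#   x -> R (x - c) + c.
import numpy as np

def rcm_pivot_position(p, c, R):
    """New pivot position that keeps the remote center c invariant."""
    return c + R @ (p - c)

# Example: tilt 10 degrees about the x axis with the pivot 100 mm above the tip.
theta = np.deg2rad(10.0)
R = np.array([[1.0, 0.0, 0.0],
              [0.0, np.cos(theta), -np.sin(theta)],
              [0.0, np.sin(theta),  np.cos(theta)]])
c = np.array([0.0, 0.0, 0.0])        # remote center at the probe tip
p = np.array([0.0, 0.0, 100.0])      # current rotary-stage pivot, in mm
print(rcm_pivot_position(p, c, R))   # where the linear stages must place the pivot
```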

[Figure: First rotary stage and parallelogram mechanism]

The system can be considered as two fully-encoded mechanical subsystems connected by an unencoded passive arm. Ideally, the existing camera system would provide the homogeneous transformation matrix between the two ends of the passive arm. In these experiments, however, the camera system was not available and we therefore used the force sensor to obtain a coarse estimate of the 3×3 rotation component of the transformation matrix. The passive arm enables us to use more compact linear stages, which considerably decreases the weight of the robot attached to the bridge. The passive arm is mainly used to bring the US probe to the vicinity of the target area, and to provide the 6th degree of freedom (rotation about the probe axis).
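
The force-based estimation method is not detailed on this page; one plausible reading, sketched below purely as an assumption, is to align the gravity vector sensed by the static force sensor with the known gravity direction in the base frame, which constrains only two of the three rotational degrees of freedom and therefore yields a coarse estimate.

```python
# Hypothetical sketch: coarse orientation estimate from gravity. With the robot
# static, the force sensor reads the probe's weight vector; the smallest rotation
# taking that reading to the base-frame gravity direction (Rodrigues' formula)
# fixes 2 of 3 rotational dof -- the rotation about gravity remains unknown.
import numpy as np

def rotation_aligning(a, b):
    """Smallest rotation matrix taking unit(a) to unit(b)."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    v = np.cross(a, b)                     # rotation axis (unnormalized)
    s, c = np.linalg.norm(v), np.dot(a, b)
    if s < 1e-9:                           # vectors (anti)parallel
        if c > 0:
            return np.eye(3)
        axis = np.cross(a, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-9:
            axis = np.cross(a, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)  # 180-degree rotation
    K = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + K + K @ K * ((1.0 - c) / s**2)

g_base = np.array([0.0, 0.0, -1.0])      # gravity direction in the base frame
f_static = np.array([0.2, -0.1, -0.97])  # example static force reading (probe weight)
R_coarse = rotation_aligning(f_static, g_base)
```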

Control Schemes

There are three basic procedures that must be performed with the robotic US system:

  • On the treatment planning day, which we call day-1, the ultrasonographer finds the target with the US probe attached to the robotic system. This free-space motion is provided through an unconstrained cooperative control process (a minimal sketch of this control law appears after this list). Once the user places the probe on the tissue, he/she disables cooperative control and lets go of the ultrasound probe. At this point, the force sensor measures the reaction force due to the contact between the US probe and the patient's tissue.
  • Still on day-1, the user replaces the US probe with a model (fake) probe and reproduces the original probe position and force with the assistance of the robot. This requires a constrained cooperative control mode, where the constraints help the user to reproduce the original probe location. Details of this cooperative control method are provided below. Note that the user cannot view real-time US images during this procedure because a model probe is installed.
  • On day-2, a different person uses the force and position information from day-1 to reproduce the original US imaging conditions. This requires a constrained cooperative control mode, as above, but in this case it is reasonable to place less confidence in the recorded probe position information due to: (1) differences between the treatment room and simulator room coordinate systems, (2) errors in patient setup, and (3) physiologic changes in the patient. We assume, however, that the recorded force information is still valid. This procedure may be performed with the US or model probe, depending on whether or not CBCT will be acquired. If the US probe is installed, the user can also compare the current US image to the day-1 US image.
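
For concreteness, here is a minimal sketch of the unconstrained cooperative (admittance) control law referenced in the first item above; the gain value and names are illustrative assumptions, not the actual controller parameters.

```python
# Minimal sketch of unconstrained cooperative (admittance) control: the robot
# moves with a Cartesian velocity proportional to the force the user applies
# to the probe. The gain is illustrative, not the actual controller setting.
import numpy as np

ADMITTANCE_GAIN = 0.02  # (m/s)/N, illustrative

def cooperative_velocity(f_user):
    """Map the user's applied force (3-vector, N) to a commanded velocity (m/s)."""
    return ADMITTANCE_GAIN * np.asarray(f_user, dtype=float)

# In practice this runs at the servo rate, with the measured force first
# compensated for the weight of the probe itself.
```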

Cooperative Control with Virtual Spring Constraints

[Figure: Cooperative control scheme with virtual spring constraints]

This controller is used to help the user reproduce the initial placement of the US probe. It is used on day-1 to place the model probe for CT imaging, with the goal of obtaining the same tissue deformation as previously obtained with the real US probe. It is also used on day-2 to reproducibly place the real or model probe prior to treatment.

On day-1, we define a goal frame whose origin is at the tip of the US probe, with its axes determined by the orientation of the US probe. The constrained controller should then help the user to move the US probe tip to the x = y = 0 position of the goal frame. Next, the controller should help the user to reproduce the correct orientation by aligning the probe with the goal frame. Finally, the controller should enter a mode that helps the user to apply the specified (previously recorded) force on the patient. Note that this sequence of actions mirrors the actions typically performed during image-guided needle interventions: (1) position the needle tip at the entry point, (2) align the needle axis with the insertion axis, and (3) insert the needle along its axis until it reaches the target.

When the user switches from unconstrained cooperative control mode to guidance mode, the system remains in the unconstrained cooperative control mode until the US probe crosses the x = 0 or y = 0 plane. Once a plane is crossed, a virtual spring is created and attached between the plane and the probe tip, thereby pulling the probe toward the plane (i.e., a soft virtual fixture). When both planes have been crossed, the result is two orthogonal springs that pull the probe toward the line formed by the intersection of the two planes (i.e., the line that passes through the goal position at x = y = 0). These virtual springs are visualized in the figure on the left. The virtual spring force is proportional to the displacement along the corresponding axis.
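
The latching behavior of a single spring axis can be sketched as follows (a minimal illustration; the stiffness value and class structure are assumptions, not the implemented controller):

```python
# Minimal sketch of one virtual-spring axis: no force is applied until the probe
# crosses the goal plane (displacement changes sign); the spring then latches on
# and pulls the probe back toward the plane with a force proportional to the
# displacement. The stiffness value is illustrative.
K_SPRING = 200.0  # N/m, illustrative

class AxisSpring:
    """Soft virtual fixture for one axis, assuming approach from the positive side."""
    def __init__(self):
        self.active = False

    def force(self, displacement):
        """displacement: signed distance (m) from the goal plane along this axis."""
        if not self.active and displacement <= 0.0:
            self.active = True            # plane crossed: latch the spring on
        if not self.active:
            return 0.0                    # unconstrained until the plane is crossed
        return -K_SPRING * displacement   # restoring force toward the plane

# Latching only after the crossing avoids the discontinuity discussed below:
# the spring force starts from zero rather than jumping to a large value.
```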

When the linear springs are enabled, the user can switch to the rotation mode to receive guidance about the x and y axes via torsional springs. This procedure is similar to that for the linear springs: the user must cross the goal orientation before a torsional spring is created and applied. It should be noted that the requirement to pass through the goal position or orientation before activating the spring prevents a discontinuity in the motion of the robot; if a spring were instead enabled while the probe was far from the desired position, the controller would immediately apply a large force to move the probe toward the goal.

Between the treatment planning day and the treatment day, anatomical changes in the target organ (e.g., tumor shrinkage) or physical changes in the patient (e.g., gas in the abdominal area) increase the importance of the virtual spring approach. In such a case, the goal position recorded on day-1 will not create the same soft tissue deformation. It should be noted, however, that these displacements are expected to be relatively small (i.e., on the order of 1-2 mm). With this approach, the clinician is guided to the target area via the springs, while retaining the flexibility to modify the final position and orientation.

When all springs are activated and the desired position and orientation about the x and y axes are set, which is ideally the same position and orientation found on day-1, the user can enable the hybrid position/force control mode [4]. In this mode, the controller keeps the force component along the probe's z axis equal to the desired force from day-1, while maintaining the current position on all other degrees of freedom. With this motion, the robot advances into the body along the US probe z axis. It should be emphasized that force control along the direction normal to the patient's body is essential to compensate for patient motion during radiation therapy (e.g., breathing motion).
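
A minimal sketch of this hybrid mode follows, assuming a simple proportional law on the force error (the actual control law is not given on this page) and illustrative gains.

```python
# Minimal sketch of hybrid position/force control: proportional position hold
# on x and y, proportional force regulation along the probe z axis (taken here
# to point into the patient). Gains and the control law are assumptions.
import numpy as np

KP = 5.0     # 1/s, position-hold gain, illustrative
KF = 0.001   # (m/s)/N, force-error gain, illustrative

def hybrid_velocity(p, p_goal, f_z, f_z_goal):
    """Commanded Cartesian velocity in the probe frame."""
    v = KP * (np.asarray(p_goal, dtype=float) - np.asarray(p, dtype=float))
    v[2] = KF * (f_z_goal - f_z)   # z: regulate contact force instead of position
    return v

# If the measured contact force drops (e.g., the patient exhales), the force
# error drives the probe further along +z until the day-1 force is restored.
```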

Experiments and Results

[Figure: Experiment setup]

The experiment setup in the figure on the right is used to evaluate the effectiveness of the system and the control approach. In the experiments, 2D B-Mode images are generated to simulate the treatment procedure. In order to more easily identify and analyze the US images (and to visualize deformations), we created a 4×4 grid of markers inside the phantom. These markers were actually segments of pencil lead that were implanted using a transrectal ultrasound (TRUS) robot for prostate brachytherapy [5]. An example of a reference US image, representing the day-1 treatment planning image, is shown in the figure below. The performance of the virtual spring control algorithm and of the force control along the probe direction are evaluated in two different experiments. First, a reference US image is obtained using the cooperative control mode; this represents the day-1 reference image. At this robot configuration, the position of the robot and the force applied on the US probe are recorded as the goal information.

Experiment 1

This experiment is performed to compare the virtual spring control approach with an unconstrained cooperative control approach (free hand mode) when the recorded goal position is not exact, which is the case for the day-2 positioning tasks. Essentially, this experiment checks whether the virtual fixture created by the virtual springs is “soft” enough to allow the user to compensate for relatively small errors using the US image feedback.

[Figure: Reference US image of the 4×4 marker grid]

Here, an initial goal position (and force) is selected and a reference US image is acquired. The goal position is then randomly disturbed in the x-y plane by an amount between 3 mm and 5 mm on each axis, which emulates the relative motion of the target area with respect to the robot base between the treatment planning day and the treatment day. The user is asked to place the US probe so as to obtain a US image as close to the reference image as possible, using two methods: unconstrained cooperative control (free hand) and virtual spring control. In unconstrained cooperative control, the user does not get any feedback from the robot. In virtual spring control, however, the user is guided to the incorrect (disturbed) goal and therefore must pull against the virtual springs to displace the probe until the correct image is obtained. This procedure is repeated a total of 9 times for each method, and the following information is recorded: (1) the time it takes the user to find the goal, and (2) the position error, which is the difference between the user's final position and the original goal position. As seen in the figures below, virtual spring control shows an improvement in both time and accuracy compared to free hand control.

[Figure: Completion time and position error for free hand vs. virtual spring control]

Experiment 2

[Figure: Force applied to the probe across repeated trials]

To analyze the force control algorithm and to assess the repeatability of creating the same soft tissue deformation, the US probe is commanded to move down to the target position and back up on its own, with all virtual springs activated to ensure that there are no major displacements in the x-y plane. This procedure is repeated 6 times and the force applied to the probe by the phantom is recorded (figure on the left). In this plot, "Experiment Number 0" corresponds to the force applied on the treatment planning day, which is taken as the reference. The controller is successful in applying pressure on the phantom close to this reference value.

Having obtained satisfactory force control results, the next step is to determine whether the US probe created similar soft tissue deformations. For this, the centers of each marker in the 4×4 grid are located in each of the 6 target images, as shown in the figure on the right. The error in reproducing the deformation is defined as the distance between the center of each grid marker and its corresponding center in the reference image. The results show that the mean absolute error is 0.9 ± 0.5 mm in the x direction and 0.3 ± 0.3 mm in the z direction.
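
As an illustration of this metric, here is a minimal sketch of the per-axis mean absolute error computation over segmented marker centers (the coordinates are made-up examples; the actual marker segmentation is not shown):

```python
# Minimal sketch of the deformation-reproducibility metric: per-axis mean
# absolute error between grid-marker centers in a target image and their
# counterparts in the day-1 reference image. Coordinates are made-up examples.
import numpy as np

ref = np.array([[10.0, 30.0], [20.0, 30.0]])     # (x, z) marker centers, reference [mm]
target = np.array([[10.8, 29.8], [21.1, 30.4]])  # same markers in one target image [mm]

mae_x, mae_z = np.abs(target - ref).mean(axis=0)
print(f"MAE: x = {mae_x:.2f} mm, z = {mae_z:.2f} mm")
```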

[Figure: Marker centers in the six target images]

* This work is supported by NIH R01 CA161613.

Publications

Sen, H. Tutkun; Cheng, Alexis; Ding, Kai; Boctor, Emad; Wong, John; Iordachita, Iulian; Kazanzides, Peter

Cooperative Control with Ultrasound Guidance for Radiation Therapy Journal Article

In: Frontiers in Robotics and AI, vol. 3, 2016.


Sen, H. Tutkun; Bell, Muyinatu A. Lediju; Zhang, Yin; Ding, Kai; Boctor, Emad; Wong, John; Iordachita, Iulian; Kazanzides, Peter

System Integration and In-Vivo Testing of a Robot for Ultrasound Guidance and Monitoring during Radiotherapy Journal Article

In: IEEE Trans. on Biomedical Engineering, 2016.


Sen, H Tutkun; Bell, Muyinatu A Lediju; Zhang, Yin; Ding, Kai; Wong, John; Iordachita, Iulian; Kazanzides, Peter

System integration and preliminary in-vivo experiments of a robot for ultrasound guidance and monitoring during radiotherapy Proceedings Article

In: IEEE Intl. Conf. on Advanced Robotics (ICAR), pp. 53-59, Istanbul, Turkey, 2015.


Bell, Muyinatu A. Lediju; Sen, H. Tutkun; Iordachita, Iulian I.; Kazanzides, Peter; Wong, John

In vivo reproducibility of robotic probe placement for a novel US-CT image-guided radiotherapy system Journal Article

In: Journal of Medical Imaging, vol. 1, no. 2, pp. 025001.1-025001.9, 2014.


Bell, Muyinatu A. Lediju; Sen, H. Tutkun; Iordachita, Iulian I.; Kazanzides, Peter; Wong, John

In vivo reproducibility of robotic probe placement for an integrated US-CT image-guided radiotherapy system Proceedings Article

In: SPIE Medical Imaging, San Diego, CA, 2014.


Sen, H. Tutkun; Bell, Muyinatu A. Lediju; Iordachita, Iulian; Wong, John; Kazanzides, Peter

A Cooperatively Controlled Robot for Ultrasound Monitoring of Radiation Therapy Proceedings Article

In: IEEE/RSJ Intl. Conf. on Intelligent Robots and Systems (IROS), pp. 3071-3076, Tokyo, Japan, 2013.


Project Bibliography

[1] F. Pierrot, E. Dombre, E. Degoulange, L. Urbain, P. Caron, J. Gariepy, and J.-L. Megnien. Hippocrate: a safe robot arm for medical applications with force feedback. Medical Image Analysis, 3(3):285–300, 1999.

[2] P. Abolmaesumi, S. E. Salcudean, W.-H. Zhu, M. R. Sirouspour, and S. P. DiMaio. Image-guided control of a robot for medical ultrasound. IEEE Trans. on Robotics and Automation, 18(1):11–23, 2002.

[3] A. Vilchis, J. Troccaz, P. Cinquin, K. Masuda, and F. Pellissier. A new robot architecture for tele-echography. IEEE Trans. on Robotics and Automation, 19(5):922–926, 2003.

[4] M. H. Raibert and J. J. Craig. Hybrid position/force control of manipulators. ASME J. of Dynamic Systems, Measurement, and Control, 102(2):126–133, Jun 1981.

[5] G. Fichtinger, J. Fiene, C. Kennedy, G. Kronreif, I. Iordachita, D. Song, E. Burdette, and P. Kazanzides. Robotic assistance for ultrasound-guided prostate brachytherapy. Medical Image Analysis, 12(5):535–545, Oct 2008.