Technical review of a clinician-driven low-code workflow for anatomical segmentation in radiologic imaging
*Corresponding author: Dr. Ankush Ankush, Department of Radio-diagnosis, LN Medical College & JK Hospital, Kolar Road, Bhopal, Madhya Pradesh, 462042, India. ankush@drankush.com
How to cite this article: Ankush A, Lal B, Burman S, Nagayach O, Sardessai S. Technical review of a clinician-driven low-code workflow for anatomical segmentation in radiologic imaging. Future Health. 2025;3:215-22. doi: 10.25259/FH_24_2025
Abstract
Artificial intelligence (AI) holds immense promise for enhancing medical imaging analysis, particularly in the realm of anatomical segmentation. However, the technical complexities of developing and deploying AI models, often requiring substantial coding expertise, have traditionally posed a barrier to entry for many clinicians. This review explores the emerging landscape of low-code (LC) AI solutions in medical imaging, focusing on their potential to empower radiologists and other healthcare professionals to actively participate in AI development. We examine a practical workflow using the fastMONAI library, an LC extension of established tools like MONAI and fastai, demonstrating how clinicians can train a U-Net-based model for cardiac MRI segmentation with minimal coding. This approach significantly reduces the technical overhead, enabling clinicians to focus on clinically relevant aspects of model development and customization. The review highlights the benefits of LC AI in fostering a more inclusive and collaborative environment for AI innovation in radiology, while also acknowledging the potential limitations and considerations for successful implementation.
Keywords
Artificial intelligence
Cardiac MRI
Low-code library
Medical imaging
Segmentation models
U-net
INTRODUCTION
The application of artificial intelligence (AI) in radiology is rapidly expanding, offering the potential to improve diagnostic accuracy, efficiency, and patient care.1 One key area of AI application is anatomical segmentation, the automated delineation of anatomical structures in medical images. Accurate segmentation is crucial for a variety of clinical tasks, including volumetric measurements, treatment planning, and disease monitoring. While traditional manual segmentation is time-consuming and subject to inter-observer variability, deep learning-based approaches, particularly convolutional neural networks (CNNs), have demonstrated remarkable performance in automating this process.2
However, the development and deployment of these AI models typically require significant programming expertise, creating a substantial barrier for many clinicians who lack formal training in computer science. This “coding barrier” can limit the ability of radiologists to directly contribute to the development of AI tools tailored to their specific clinical needs and departmental workflows. It also restricts their capacity to evaluate and adapt existing AI solutions critically.1
To address this challenge, the concept of low-code (LC) and no-code (NC) AI development has emerged. These platforms aim to democratize AI development by providing intuitive, user-friendly interfaces that minimize or eliminate the need for traditional coding. This allows individuals with limited programming experience, including clinicians, to build, train, and deploy AI models, fostering greater participation and innovation in the field.3
This review article aims to provide clinicians with an overview of LC AI approaches for anatomical segmentation in medical imaging. We focus on a practical workflow using the fastMONAI library, an LC tool designed specifically for medical image analysis.4 We illustrate how a simplified six-step workflow can be used to train a U-Net-based model for binary segmentation of the left atrium in cardiac Magnetic Resonance Imaging (MRI) scans, demonstrating the ease of use and potential for clinician-led AI development. Such a model can help in planning and guiding catheter ablation procedures, and left atrial volume may even serve as an index of cardiovascular risk and disease burden.5
LOW-CODE SOLUTION FOR RADIOLOGIC SEGMENTATION
The low-code library chosen for this technical review was fastMONAI, which builds upon state-of-the-art Python libraries including fastai, MONAI (Medical Open Network for AI), TorchIO, and Imagedata.6-9 fastMONAI offers high-level widgets, pre-trained models, and customizable pipelines for common medical use cases, covering 2D and 3D analysis for modalities such as MRI and CT. This accelerates development while retaining the flexibility to tweak parameters when required. Individual components serve specific purposes - data ingestion, preprocessing, modelling, and visualization - orchestrated via high-level method calls. Developed within the Mohn Medical Imaging and Visualization Centre (MMIV) at Haukeland University Hospital’s Department of Radiology, fastMONAI benefits from direct insights and feedback from medical imaging experts.4 The library is available at https://github.com/MMIV-ML/fastMONAI/. While fastMONAI simplifies many aspects of AI development, a structured and intuitive workflow remains integral to success. Our simplified six-step workflow comprises preparing the data, the datablock, and the learner, followed by training, evaluating with test data, and exporting for deployment. Figure 1 depicts the flowchart of our simplified data-to-model pipeline.

Figure 1: Simplified six-step data-to-model pipeline.
We present a binary semantic segmentation pipeline for the left atrium in cardiac MRI scans, using data from the Medical Segmentation Decathlon challenge by King’s College London (http://medicaldecathlon.com/), which contains scans from 20 patients with all patient-identifying metadata removed.10 The dataset’s limited size and variability increase segmentation complexity. It is important to note that this benchmark dataset may not fully represent real-world complexity: it lacks the demographic, scanner, acquisition, and disease-spectrum diversity found in clinical practice, so models trained solely on this data may harbour inherent biases.
Readers are encouraged to run the provided Jupyter Notebook as Supplementary material [see Supplementary Data, Section 1.1] either on Google Colaboratory (Colab) or a local Jupyter instance, following the comprehensively commented code cells. The default GPU (graphics processing unit) for Colab is an NVIDIA Tesla K80 with 12 GB of VRAM (video random-access memory). Its GPU runtime comes with an Intel Xeon CPU @ 2.20 GHz and 13 GB of RAM.
TECHNICAL IMPLEMENTATION AND WORKFLOW ANALYSIS
The initial step of our study involved setting up the appropriate environment for our work. We began by installing fastMONAI and proceeded to import the necessary libraries from fastMONAI into our working environment. In order to ensure optimal performance for our tasks, we utilized Google Colab’s GPU as our hardware accelerator.
STEP I: Preparing the data
fastMONAI offers the DecathlonDataset for direct data download and item generation. The images in this dataset are provided in the Neuroimaging Informatics Technology Initiative (NIfTI) format (.nii.gz). While clinical images are typically stored and transferred in the Digital Imaging and Communications in Medicine (DICOM) format, the NIfTI format is commonly used in research for its convenience in analysis and processing. To enable users to examine their own datasets, we demonstrate how to download and investigate files, particularly the JSON (JavaScript Object Notation) manifest. Its structure consists of two main components per training item, image and label, with labels distinguishing background (0) from left atrium (1) - the ground truth required for semantic segmentation models in medical imaging.
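The manifest can be inspected with the standard library alone. The snippet below mirrors the Decathlon-style structure described above; the file paths and values are illustrative, not copied from the actual dataset.json:

```python
import json

# Illustrative excerpt mirroring a Decathlon-style manifest. The field names
# ("labels", "training", "image", "label") follow the challenge convention;
# the paths below are made up for demonstration.
manifest_text = """
{
  "name": "Heart",
  "labels": {"0": "background", "1": "left atrium"},
  "training": [
    {"image": "./imagesTr/la_003.nii.gz", "label": "./labelsTr/la_003.nii.gz"},
    {"image": "./imagesTr/la_004.nii.gz", "label": "./labelsTr/la_004.nii.gz"}
  ]
}
"""

manifest = json.loads(manifest_text)

# Each training item pairs an image with its segmentation mask.
pairs = [(item["image"], item["label"]) for item in manifest["training"]]
print(manifest["labels"]["1"])  # -> left atrium
print(len(pairs))               # -> 2
```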
Further, we created a Pandas DataFrame with complete file paths for images and labels, facilitating data manipulation and integration into machine learning models. Using sklearn’s ‘train_test_split’ function, we divide the data into training (90%) and test (10%) sets. This split ensured that all images from a single patient were assigned exclusively to either the training or the test set to prevent data leakage and provide a robust evaluation of the model’s generalization capabilities.
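A patient-level split of the kind described can be sketched in pure Python; `patient_level_split` is a hypothetical helper written for illustration, not a function of fastMONAI or sklearn:

```python
import random

def patient_level_split(records, test_frac=0.1, seed=42):
    """Split (patient_id, path) records so no patient spans both sets.

    Hypothetical helper: groups records by patient before splitting,
    mirroring the leakage precaution described in the text.
    """
    patients = sorted({pid for pid, _ in records})
    rng = random.Random(seed)
    rng.shuffle(patients)
    n_test = max(1, round(len(patients) * test_frac))
    test_ids = set(patients[:n_test])
    train = [r for r in records if r[0] not in test_ids]
    test = [r for r in records if r[0] in test_ids]
    return train, test

# 20 patients, 90/10 split -> 18 train, 2 test
records = [(f"pt{i:02d}", f"la_{i:03d}.nii.gz") for i in range(20)]
train, test = patient_level_split(records)
assert not {p for p, _ in train} & {p for p, _ in test}  # no patient leaks
```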
STEP II: Preparing the datablock and loading the data
We enhanced data diversity using preprocessing and augmentation techniques. The MedDataset class analyzes label distribution, providing data dimensions, voxel size, orientation, and counts. We defined resampling, reordering, and image size standardization. Data augmentation included random rotation and normalization, improving model generalization. This step is crucial for significantly improving the model’s ability to generalize to unseen data, enriching the overall training process.
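The normalization step can be illustrated in isolation. This is a generic z-normalization sketch, not fastMONAI's internal implementation:

```python
import math

def z_normalize(voxels):
    """Shift and scale intensities to zero mean and unit variance,
    the standard normalization applied before training."""
    n = len(voxels)
    mean = sum(voxels) / n
    std = math.sqrt(sum((v - mean) ** 2 for v in voxels) / n)
    return [(v - mean) / std for v in voxels]

normalized = z_normalize([10.0, 20.0, 30.0, 40.0])
# The result has mean ~0 and standard deviation ~1.
```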
fastMONAI provides the ‘MedDataBlock’ container to quickly build dataloaders. It formats data for neural network training, specifying input/output blocks (images/labels), splitting strategies (for validation sets), and augmentation methods. We defined the batch size, specifying the number of training examples processed in one iteration. A smaller batch size, such as 4, offers advantages like enhanced memory efficiency. For efficient data loading, mini-batches are drawn after random shuffling at each epoch; fastMONAI’s DataLoader class automates this shuffling and mini-batch collation, streamlining the data handling workflow.
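The shuffling and mini-batch collation that the DataLoader automates amount to the following; `make_batches` is a hypothetical illustration, not the library's API:

```python
import random

def make_batches(items, batch_size, seed=0):
    """Shuffle once per epoch, then group into mini-batches --
    the behaviour a DataLoader automates."""
    pool = list(items)
    random.Random(seed).shuffle(pool)
    return [pool[i:i + batch_size] for i in range(0, len(pool), batch_size)]

# 18 training volumes with batch size 4 -> four full batches plus a remainder
batches = make_batches(range(18), batch_size=4)
print([len(b) for b in batches])  # -> [4, 4, 4, 4, 2]
```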
STEP III: Preparing the learner
A Learner in deep learning consolidates essential components for model training: dataloader, model architecture, loss function, optimizer, and metrics.
We imported the U-Net model from MONAI for our segmentation task. U-Net excels in biomedical image segmentation by capturing local features (such as object shape or texture) and global features (such as the object’s position within the image).11 We initialized U-Net for 3D MRI images with three spatial dimensions, one input channel, and one output channel for binary segmentation. The architecture comprises an encoder pathway (contracting path) and a decoder pathway (expansive path), with the ‘channels’ parameter dictating the number of feature maps at each level during encoding and decoding.12 The ‘strides’ parameter controls down-sampling by determining the shift of the filter/kernel over the input during convolution operations. The number of residual units aids in learning complex patterns by facilitating direct gradient flow between layers, enhancing convergence during training.
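The interplay of the ‘channels’ and ‘strides’ parameters can be traced with a simplified sketch. The numbers below are illustrative, and the bookkeeping is deliberately simplified relative to MONAI's actual U-Net internals:

```python
def encoder_shapes(input_size, channels, strides):
    """Trace how each stride-s convolution divides the spatial size by s
    while the channel count follows `channels`. Simplified illustration,
    not MONAI's exact layer arithmetic."""
    shapes = [(channels[0], input_size)]
    size = input_size
    for ch, s in zip(channels[1:], strides):
        size = size // s  # a stride-s convolution downsamples by s
        shapes.append((ch, size))
    return shapes

# e.g. a 96-voxel-wide volume through channels (16, 32, 64, 128), strides (2, 2, 2)
print(encoder_shapes(96, (16, 32, 64, 128), (2, 2, 2)))
# -> [(16, 96), (32, 48), (64, 24), (128, 12)]
```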
We used DiceLoss from MONAI as our loss function, measuring the overlap between predicted and target segmentation masks. Setting its ‘sigmoid’ parameter to True applies a sigmoid activation to the model’s output before the loss is calculated, allowing raw outputs to be interpreted as probabilities or confidence scores. The Dice loss is defined as 1 - Dice coefficient and ranges from 0 (perfect overlap) to 1 (no overlap).
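The Dice loss itself is simple enough to write from scratch. Below is a minimal soft-Dice sketch over flattened masks, for intuition only (not MONAI's DiceLoss implementation):

```python
def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened binary masks:
    1 - 2*|P intersect T| / (|P| + |T|).
    `pred` holds probabilities in [0, 1], i.e. the output
    after the sigmoid described above."""
    intersection = sum(p * t for p, t in zip(pred, target))
    denom = sum(pred) + sum(target)
    dice = (2 * intersection + eps) / (denom + eps)
    return 1 - dice

perfect = dice_loss([1.0, 0.0, 1.0], [1, 0, 1])   # ~0.0 (perfect overlap)
disjoint = dice_loss([1.0, 0.0, 0.0], [0, 0, 1])  # ~1.0 (no overlap)
```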
Our optimizer was set to ‘ranger’, an optimization algorithm combining RAdam and LookAhead mechanisms. Ranger is lauded for its ability to achieve rapid convergence and robust performance by adaptively adjusting neural network weights and learning rates to minimize loss.13 For evaluating model performance during training and validation, the binary Dice score was chosen as the metric. It measures the overlap between predicted and actual binary masks, with higher scores indicating better alignment between predictions and ground-truth labels.
This Learner instance efficiently manages tasks such as forwarding input data through network layers, calculating losses based on predictions versus actual labels, adjusting network parameters using the specified optimizer based on calculated losses, and tracking progress using specified metrics.
STEP IV: Training
Before training, it is crucial to find the optimal learning rate. The learn.lr_find() method helps identify this rate by training the model at incrementally increasing rates, plotting losses against learning rates on a logarithmic scale. As the learning rate increases, the model learns faster, reducing loss. However, an excessively high learning rate can cause overshooting or divergence, resulting in increased losses. This process yielded a plot with a distinct shape, highlighting a value (10⁻²) where the model learned efficiently without divergence or overshooting [Figure 2].

Figure 2: Determination of optimal learning rate.
We initiated training using the fit_flat_cos method, employing ‘Flat and Cosine Annealing’ scheduling. This maintains a constant learning rate initially, then gradually reduces it following a cosine schedule. This strategy aids in fine-tuning parameters and achieving optimal solutions without excessive oscillations. Furthermore, to iteratively refine its parameters and converge towards optimal performance, we trained for 200 epochs (complete passes through our dataset) at the designated learning rate.
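The schedule can be reproduced in a few lines. The 75% flat fraction below matches fastai's documented default for fit_flat_cos, though the library's exact internals may differ:

```python
import math

def flat_cos_lr(step, total_steps, base_lr=1e-2, flat_frac=0.75):
    """'Flat and cosine annealing': hold base_lr for the first
    flat_frac of training, then decay towards 0 along a half cosine.
    flat_frac=0.75 mirrors fastai's default pct_start for fit_flat_cos."""
    flat_steps = int(total_steps * flat_frac)
    if step < flat_steps:
        return base_lr
    progress = (step - flat_steps) / max(1, total_steps - flat_steps)
    return base_lr * 0.5 * (1 + math.cos(math.pi * progress))

# For 200 steps: constant at 1e-2 for the first 150, then smoothly annealed.
lrs = [flat_cos_lr(s, 200) for s in range(200)]
```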
To visualize our model’s training process, we utilized the `learn.recorder.plot_loss()` function. This function leverages the recorder attribute within a Learner object, which tracks various metrics throughout training, including losses at each step or epoch. Insights gained from these plots include:
If both training and validation losses decrease, it indicates continual improvement on both seen (training) and unseen (validation) data.
If training loss decreases while validation loss increases, it suggests potential overfitting to training data—improved performance on seen data but worse on unseen data.
If both losses remain high or do not decrease, the model might be underfitting, requiring adjustments such as a more complex architecture or an enhanced training strategy.
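These heuristics can be captured in a toy diagnostic; `diagnose` is a hypothetical helper operating on per-epoch loss lists, written purely for illustration:

```python
def diagnose(train_losses, valid_losses, window=3, tol=1e-3):
    """Crude reading of the last few epochs' losses, mirroring the
    three insights above. Purely illustrative heuristic."""
    t_trend = train_losses[-1] - train_losses[-window]
    v_trend = valid_losses[-1] - valid_losses[-window]
    if t_trend < -tol and v_trend < -tol:
        return "improving"
    if t_trend < -tol and v_trend > tol:
        return "possible overfitting"
    return "possible underfitting or plateau"

print(diagnose([0.9, 0.5, 0.3, 0.2], [0.8, 0.6, 0.5, 0.45]))  # -> improving
print(diagnose([0.9, 0.5, 0.3, 0.2], [0.8, 0.6, 0.65, 0.7]))  # -> possible overfitting
```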
Alternatively, ‘matplotlib.pyplot’ can be used to plot training, validation losses, and dice scores from ‘learn.recorder.values’ [Figure 3].

Figure 3: Visualization of metrics obtained after training.
After training, we saved the model using ‘learn.save()’ to preserve the learner’s internal state, including acquired parameters. This step conserves time and computational resources invested in training. For visual inspection, we used ‘learn.show_results()’ to generate images comparing model predictions with ground truth for validation set samples. This visual inspection provides insights into potential shortcomings in specific image types or data patterns.
STEP V: Evaluate model with test data
To assess how well our model had trained, we evaluated it with the test data generated during the splitting process in Step I. To facilitate this, a dataloader named ‘test_dl’ was created from the test data DataFrame, `test_df`.
For numerical evaluation, the Dice score was employed as the metric. The model’s predictions were compared against the expert-provided segmentation masks from the public dataset, which served as the gold standard. This yielded a Dice score of approximately 0.9232, indicating substantial overlap between the predicted and actual segmented areas. Furthermore, the `learn.show_results` method was employed to visually present the model’s outcomes on the test data at specific anatomical planes [Figure 4]. This visual inspection of the final model’s performance on unseen test data is the standard for evaluation and is representative of similar validation checks performed throughout training using the same function on the validation set. This dual approach, integrating quantitative and visual assessments, supplied valuable insights into the model’s efficacy, allowing for a comprehensive evaluation of its performance.

Figure 4: Prediction on test images with segmentation of the left atrium by the trained model.
STEP VI: Export and deploy
We saved the crucial variables related to data processing and model configuration into a pickle file. Furthermore, the trained model itself was exported using the learn.export(‘heart_model.pkl’) method. This process encapsulated everything about the learner object, not just the model parameters, but also architectural details. The resultant ‘heart_model.pkl’ file serves as a comprehensive package for deployment. We deployed our trained model using Gradio on Hugging Face, which can be publicly accessed at https://drankush-ai-left-atrium-heart-segmentation.hf.space/ [Figure 5].

Figure 5: Deployment using Gradio for demonstration; can be accessed at https://drankush-ai-left-atrium-heart-segmentation.hf.space/.
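Saving the data-processing variables works via standard Python pickling; the variable names below are illustrative, not the exact ones used in the notebook:

```python
import os
import pickle
import tempfile

# Hypothetical bundle of preprocessing variables saved alongside the model,
# mirroring the export step (the keys and values are illustrative).
config = {"resample": [1.0, 1.0, 1.0], "reorder": True, "img_size": [96, 96, 96]}

path = os.path.join(tempfile.mkdtemp(), "vars.pkl")
with open(path, "wb") as f:
    pickle.dump(config, f)       # serialize for deployment

with open(path, "rb") as f:
    restored = pickle.load(f)    # the deployment app reloads the same settings

assert restored == config
```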
COMPARATIVE ANALYSIS OF LOW-CODE VERSUS TRADITIONAL WORKFLOWS
To further explore the efficiency of our low-code library, fastMONAI, we also trained a model on similar data, employing similar preprocessing steps and a U-Net architecture using the MONAI library [see Supplementary Data, Section 1.2]. The code from both libraries, covering all steps, was uniformly styled using a standard style guide for Python coding14 and stripped of comments, blank lines, and organizational text typically found in Jupyter notebooks, ensuring a fair comparison of actual functional code. We observed a 75.08% reduction (244 fewer lines) in code [see Supplementary Data, Section 1.3]. This demonstrates the substantial reduction in coding complexity and effort with fastMONAI, affirming its effectiveness as a low-code library.
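The normalization used for this comparison can be approximated as follows. This simplified counter skips blank and comment-only lines; docstrings and notebook markdown, which the comparison also excluded, would need manual removal:

```python
import io

def functional_lines(source):
    """Count lines of functional Python code, skipping blank lines and
    comment-only lines -- a simplified version of the normalization
    used for the line-count comparison."""
    count = 0
    for line in io.StringIO(source):
        stripped = line.strip()
        if stripped and not stripped.startswith("#"):
            count += 1
    return count

sample = """
# load data
import pandas as pd

df = pd.read_csv("paths.csv")  # inline comments still count as code
"""
print(functional_lines(sample))  # -> 2
```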
EMERGING TRENDS IN LOW-CODE/NO-CODE APPLICATIONS FOR DEEP LEARNING MODEL TRAINING
Democratizing deep learning education and research
A European survey of 1041 radiologists and residents revealed that limited AI expertise correlated with fear of AI replacing roles and less proactive attitudes. This suggests that unfamiliarity with AI limitations may lead to overestimation of its threat and undervaluation of one’s role in AI integration. Conversely, advanced AI knowledge fostered a more open stance.15 The survey identified a lack of knowledge as a significant hurdle in AI implementation, with 79% supporting AI education in radiology residency programs.16 Recent discussions emphasize the need for AI curricula that enable students to apply machine learning without delving into complex algorithms.17 Richardson and Odeja’s curriculum for teaching deep learning to radiology residents using a No-code system demonstrated effectiveness in conveying insights and increasing interest.18 The advantages and challenges of employing Low-Code/No-code (LCNC) AI for teaching machine learning in higher education are increasingly recognized.19
There is a scarcity of beginner-friendly literature on AI model training in radiology, partly because rapid advancements render libraries obsolete. For instance, the popular “Magician’s Corner” tutorial series, designed to teach U-Net model building for image segmentation, now encounters errors because Google Colab no longer supports TensorFlow 1.20 Our simplified six-step workflow, based on an actively maintained LC library, offers a structured pathway for efficient foundational model preparation for radiologists and residents interested in learning model training through a hands-on approach. Through a series of intuitive function calls and concise steps, this workflow guides beginners through data handling, augmentation, loss function selection, model architecture utilization, hyperparameter tuning, and training monitoring. This sequence of steps supports an iterative, human-centric process with a gentler learning curve and intuitive explanation of design choices.21 The streamlined process allows professionals to focus on refining models rather than wrestling with intricate coding, thereby significantly enhancing overall efficiency. Customized LCNC AI platforms in health informatics can alleviate technical knowledge challenges across domains and democratize AI access for non-coders.22,23 Beyond our case study of left atrium segmentation, fastMONAI has been successfully used for tumour segmentation in cervical and endometrial cancer, spine segmentation, and pulmonary nodule classification.24-27
Furthermore, this approach also cultivates continuous learning opportunities within medical organizations. Enabling non-technical team members to engage in AI model development fosters a culture of interdisciplinary collaboration.28,29 LC also reduces the occurrence of unpredictable or inconsistent requirements, making it easier to construct initial prototypes for requirement validation. This helps prevent resource wastage on unnecessary features or functionalities and avoids unnecessary development cycles.30
Comparative advantages and limitations
Although LCNC solutions can provide a generalizable workflow for training a model, the foundation of an effective and practically generalizable model rests on the quality of the input data. This requires a diverse set of MRI images encompassing various demographics, pathologies, and image qualities, along with meticulous preprocessing to address inconsistencies and artifacts, ensuring the model’s ability to generalize across varied scenarios [Table 1].31 This technical review focuses on the feasibility and efficiency of the low-code training workflow itself; a full clinical validation was beyond the scope of this paper. Before any clinical deployment, a crucial next step would be a rigorous co-evaluation of the model’s output with a team of expert clinicians and a statistical assessment against manual segmentations to account for inter- and intra-rater variability. LC also presents challenges in terms of scalability, as it may enable project-based solutions but might not materialize at the enterprise level.30 Although LC libraries provide greater room for adjustment than no-code libraries, a key trade-off exists: these platforms prioritize ease of use and rapid development, which may come at the cost of the fine-grained control needed for highly novel research. For example, designing a completely new neural network layer or a bespoke loss function from first principles would be difficult or impossible. Therefore, LC solutions are ideal for empowering clinicians to apply established, state-of-the-art architectures to new clinical problems, while traditional, code-intensive workflows remain essential for pushing the boundaries of machine learning research.
| Best practice | Explanation |
|---|---|
| Data quality | |
| Data augmentation | |
| Loss function | |
| Model architecture | |
| Hyperparameter tuning | |
| Training monitoring | |
| Evaluate on a diverse test set | |
While this review focuses on the U-Net architecture due to its widespread adoption, proven efficacy, and straightforward implementation within LCNC frameworks, it is important to acknowledge recent advancements in model architectures. Beyond convolutional baselines, transformer-based architectures such as Vision Transformers and hybrid variants (e.g., TransUNet, Swin-UNet) have advanced performance in medical segmentation by capturing long-range dependencies and global context. In parallel, foundation models like the Segment Anything Model (SAM) enable promptable, zero-/few-shot segmentation and can accelerate annotation, though careful domain adaptation is needed for grayscale modalities, small structures, and domain shift.32
Regulatory and privacy considerations
It is critical to address the regulatory landscape surrounding AI in medicine. Any software tool intended for clinical diagnosis or treatment planning is considered a medical device and is subject to rigorous oversight and approval from regulatory bodies like the U.S. Food and Drug Administration (FDA) or to receive a CE mark in Europe. While low-code platforms are excellent for research, rapid prototyping, and developing institution-specific, non-diagnostic tools, any model intended for widespread clinical use must undergo a formal validation and documentation process that meets these stringent regulatory standards. This remains true regardless of the platform it was built on. Developers must also ensure that all aspects of data handling, from training to deployment, are compliant with data privacy regulations such as HIPAA.
Future development
Research efforts have been initiated towards the development of a semi-automatic annotation loop for fastMONAI with an active learning pipeline by providing expert annotators with automatically segmented lesion/target instances for refinement.4 Future endeavours also seek to expand on PACS (Picture Archiving Communications System) integration and to provide more detailed and extensive documentation.
CONCLUSION
Low-code and no-code platforms can potentially reshape the landscape of deep learning in radiologic research by making advanced AI techniques accessible to clinicians with limited coding expertise. The comprehensive review of a fastMONAI-based six-step workflow demonstrates that clinician-driven anatomical segmentation can achieve high performance while significantly reducing development complexity. The ability to rapidly prototype, evaluate, and deploy models not only accelerates clinical research but also fosters a collaborative environment where multidisciplinary teams can innovate together.
The review also highlights emerging trends in LC/NC applications for deep learning model training, emphasizing their role in educational settings and clinical practice. While challenges related to customization and scalability persist, the advantages of reduced development time, enhanced reproducibility, and improved accessibility underscore the transformative potential of LC/NC platforms in medical imaging. Continued advancements in this field, alongside rigorous validation using diverse datasets, will be crucial for realizing the full potential of AI-driven clinical applications.
Acknowledgement
The authors would like to thank Sathiesh Kaliyugarasan, the creator of fastMONAI, for his initial guidance in developing this model for binary segmentation.
Author contributions
AA: Conceptualized and designed the study, carried out the software implementation, and wrote the manuscript; BL and SB: Contributed to the investigation and manuscript revision; ON: Responsible for software deployment; SS: Contributed to manuscript revision; All authors read and approved the final manuscript.
Ethical approval
Institutional Review Board approval is not required.
Declaration of patient consent
Patient consent was not required as there are no patients in this study.
Financial support and sponsorship
Nil.
Conflicts of interest
There are no conflicts of interest.
Use of artificial intelligence (AI)-assisted technology for manuscript preparation
The authors confirm that there was no use of artificial intelligence (AI)-assisted technology for assisting in the writing or editing of the manuscript, and no images were manipulated using AI.
References
- Beyond mathematics, statistics, and programming: Data science, machine learning, and artificial intelligence competencies and curricula for clinicians, informaticians, science journalists, and researchers. Health Syst (Basingstoke). 2023;12:255-63.
- [CrossRef] [PubMed] [PubMed Central] [Google Scholar]
- Machine learning techniques for biomedical image segmentation: An overview of technical aspects and introduction to state-of-art applications. Med Phys. 2020;47:e148-67.
- [CrossRef] [PubMed] [PubMed Central] [Google Scholar]
- Democratizing artificial intelligence: How no-code AI can leverage machine learning operations. Business Horizons. 2023;66:777-88.
- [Google Scholar]
- fastMONAI: A low-code deep learning library for medical image analysis. Software Impacts. 2023;18:100583.
- [CrossRef] [Google Scholar]
- Left atrial volume as a morphophysiologic expression of left ventricular diastolic dysfunction and relation to cardiovascular risk burden. Am J Cardiol. 2002;90:1284-9.
- [CrossRef] [PubMed] [Google Scholar]
- MONAI Consortium. MONAI: Medical Open Network for AI. Version 0.9.0, Zenodo 2022.
- TorchIO: A python library for efficient loading, preprocessing, augmentation and patch-based sampling of medical images in deep learning. Comput Methods Programs Biomed. 2021;208:106236.
- [CrossRef] [PubMed] [PubMed Central] [Google Scholar]
- Imagedata: A python library to handle medical image data in NumPy array subclass Series. JOSS. 2022;7:4133.
- [Google Scholar]
- The medical segmentation decathlon. Nat Commun. 2022;13:4128.
- [CrossRef] [PubMed] [PubMed Central] [Google Scholar]
- Deep learning: An update for radiologists. Radiographics. 2021;41:1427-45.
- [CrossRef] [PubMed] [Google Scholar]
- U-Net: Convolutional networks for biomedical image segmentation. In: Medical image computing and computer-assisted intervention – MICCAI 2015. Lecture notes in computer science. Springer International Publishing; 2015. p. 234-41.
- [Google Scholar]
- Wright L, Demeure N. Ranger21: A synergistic deep learning optimizer. 2021. Available from: http://arxiv.org/abs/2106.13731. [Last accessed 2024 Jan 23].
- van Rossum G, Warsaw B. Style guide for Python. In: Pro Python. Apress; 2010. p. 283-97.
- An international survey on AI in radiology in 1,041 radiologists and radiology residents part 1: Fear of replacement, knowledge, and attitude. Eur Radiol. 2021;31:7058-66.
- [CrossRef] [PubMed] [PubMed Central] [Google Scholar]
- An international survey on AI in radiology in 1041 radiologists and radiology residents part 2: Expectations, hurdles to implementation, and education. Eur Radiol. 2021;31:8797-806.
- [CrossRef] [PubMed] [PubMed Central] [Google Scholar]
- Current and future artificial intelligence (AI) curriculum in business school: A text mining analysis. J Inf Syst Educ. 2022;33:416-426.
- [Google Scholar]
- A “Bumper-Car” Curriculum for teaching deep learning to radiology residents*. Acad Radiol. 2022;29:763-70.
- [CrossRef] [PubMed] [Google Scholar]
- Teaching tip: Using no-code AI to teach machine learning in higher education. J Inf Syst Educ. 2024;35. Available from: https://urn.kb.se/resolve?urn=urn:nbn:se:umu:diva-207861. [Last accessed 2024 Jan 18].
- [Google Scholar]
- Magician’s corner: 4. Image segmentation with U-Net. Radiol Artif Intell. 2020;2:e190161.
- [Google Scholar]
- How can no/low code platforms help end-users develop ML applications? - A systematic review. In: HCI International 2022 – Late breaking papers: Interacting with eXtended reality and artificial intelligence. Lecture notes in computer science. Springer Nature Switzerland; 2022. p. 338-56.
- [Google Scholar]
- Low-code/no-code artificial intelligence platforms for the health informatics domain. Electron Commun EASST. 2023;82 doi:10.14279/tuj.eceasst.82.1221.1140
- [Google Scholar]
- GaNDLF: The generally nuanced deep learning framework for scalable end-to-end clinical workflows. Commun Eng. 2023;2
- [Google Scholar]
- Automated segmentation of endometrial cancer on MR images using deep learning. Sci Rep. 2021;11:179.
- [CrossRef] [PubMed] [PubMed Central] [Google Scholar]
- Fully automatic whole-volume tumor segmentation in cervical cancer. Cancers (Basel). 2022;14:2372.
- [CrossRef] [PubMed] [PubMed Central] [Google Scholar]
- Multi-center CNN-based spine segmentation from T2W MRI using small amounts of data. In: 2023 IEEE 20th International Symposium on Biomedical Imaging (ISBI). Cartagena, Colombia: IEEE; 2023. p. 1-5.
- [Google Scholar]
- Pulmonary nodule classification in lung cancer from 3D thoracic CT scans using fastai and MONAI. IJIMAI. 2021;6:83-9.
- [CrossRef] [Google Scholar]
- Artificial intelligence in radiology: Relevance of collaborative work between radiologists and engineers for building a multidisciplinary team. Clin Radiol. 2021;76:317-24.
- [CrossRef] [PubMed] [Google Scholar]
- Artificial intelligence and multidisciplinary team meetings; a communication challenge for radiologists’ sense of agency and position as spider in a web? Eur J Radiol. 2022;155:110231.
- [CrossRef] [PubMed] [Google Scholar]
- Benefits and limitations of using low-code development to support digitalization in the construction industry. Automation Construction. 2023;152:104909.
- [CrossRef] [Google Scholar]
- Preparing medical imaging data for machine learning. Radiology. 2020;295:4-15.
- [CrossRef] [PubMed] [PubMed Central] [Google Scholar]
- Medical SAM adapter: Adapting segment anything model for medical image segmentation. Med Image Anal. 2025;102:103547.
- [CrossRef] [PubMed] [Google Scholar]

