The new equipment will be part of the program group Biomedical image and signal analysis in the Laboratory of Imaging Technologies (LIT) and will enable new, in-depth research into the analysis of large amounts of medical data. One example is the analysis of the UK Biobank database, which comprises about 150 TB of medical data for more than 50,000 people, including electronic medical records and MRI examinations of the head and heart. With the new equipment and state-of-the-art deep learning techniques, we develop and evaluate diagnostic and prognostic predictive models for a wide range of medical conditions and pathologies, based on both MRI images and unstructured data. We focus on predictive models for neurological, cardiovascular, and musculoskeletal diseases and for various types of cancer.
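To make the kind of model concrete, the following is a minimal sketch of a volume-level diagnostic classifier for 3D MRI data. It assumes PyTorch; the architecture, input shape, and number of classes are illustrative placeholders, not the group's actual models.

```python
# Minimal sketch of a 3D CNN classifier for volumetric MRI data (PyTorch).
# All names, shapes, and hyperparameters are illustrative only.
import torch
import torch.nn as nn

class Simple3DClassifier(nn.Module):
    """Toy 3D convolutional network for volume-level diagnosis."""
    def __init__(self, in_channels: int = 1, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(2),                  # halve each spatial dimension
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),          # global pooling -> (N, 32, 1, 1, 1)
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)       # (N, 32)
        return self.classifier(x)             # (N, num_classes)

if __name__ == "__main__":
    # One synthetic head volume: batch x channel x depth x height x width.
    volume = torch.randn(2, 1, 64, 64, 64)
    model = Simple3DClassifier()
    print(model(volume).shape)                # torch.Size([2, 2])
```

Models used in practice are far deeper and operate on full-resolution volumes, which is precisely what drives the memory requirements discussed below.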
An alternative to acquiring new equipment would be shared use of domestic supercomputing networks such as SLING, or the services of foreign cloud providers (Amazon AWS, Google, Oracle, etc.). However, transferring and storing medical data in a remote computing center or in the cloud is neither secure nor economically justified: handling personal data carries security risks and is subject to contractual restrictions, and the sheer volume of the data makes transfer impractical.
Deep machine learning methods are computationally intensive, and their capabilities scale roughly in proportion to the number of free parameters (e.g., the number of neural-network layers) and the amount of training data processed. The analysis of multidimensional structured data such as 3D or 4D medical images is particularly demanding in both computation and memory, so massively parallel computer systems are used for this purpose.

Such systems are effective for analyzing 3D or 4D medical images only when paired with large amounts of working memory, since the convergence of training predictive models on such images depends critically on the chosen model depth (number of layers) and on the batch size. The working memory of the graphics processing units typically used for this purpose is at most about 24 GB. Distributing training across several (parallel) processing units can be demanding to program, and its efficiency is limited by the speed of the standard communication bus. Manufacturers of massively parallel computer systems have therefore recently developed high-speed interconnects (such as NVIDIA NVLink) that combine individual processing units into clusters that behave as a single parallel processing unit. The working memory attached to each processing unit is connected over the same high-speed bus, which effectively enlarges the working memory of the whole system. These properties make such systems well suited to deep-learning analysis of large amounts of data.

The new hardware pools the memory of its graphics processing units into a combined virtual working memory of 320 GB in total. The equipment was delivered in October 2022 and was fully integrated into ongoing research by November 2022.
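As an illustration of the multi-GPU training described above, the following is a minimal sketch of data-parallel training with PyTorch's DistributedDataParallel. PyTorch, the toy model, and all hyperparameters are assumptions for illustration; when the GPUs are linked by a high-speed interconnect, the NCCL backend routes the gradient all-reduce over it rather than over the standard bus.

```python
# Minimal sketch of data-parallel training across several GPUs with
# torch.distributed. Launch with, e.g.:
#   torchrun --nproc_per_node=8 train_ddp.py
# The model and the random data below are placeholders for illustration.
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")   # NCCL uses NVLink when available
    rank = int(os.environ["LOCAL_RANK"])      # set by torchrun
    torch.cuda.set_device(rank)

    model = nn.Sequential(                    # toy 3D model, one replica per GPU
        nn.Conv3d(1, 16, kernel_size=3, padding=1),
        nn.ReLU(),
        nn.AdaptiveAvgPool3d(1),
        nn.Flatten(),
        nn.Linear(16, 2),
    ).cuda(rank)
    model = DDP(model, device_ids=[rank])
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for step in range(10):                    # stand-in for a real data loader
        x = torch.randn(2, 1, 32, 32, 32, device=rank)
        y = torch.randint(0, 2, (2,), device=rank)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()                       # gradients all-reduced across GPUs
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Data parallelism replicates the model on every GPU; when a single model or its activations no longer fit within one GPU's roughly 24 GB, techniques such as model or pipeline parallelism split the network across devices, which is where the pooled 320 GB of GPU memory becomes decisive.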