KEYWORDS: 3D modeling, Convolution, Image segmentation, Tumors, 3D image processing, Education and training, Data modeling, Brain, Neuroimaging, Lawrencium
Gliomas are the primary brain tumors most commonly observed in adult patients and exhibit varying degrees of aggressiveness and prognosis. Accurate identification and diagnosis of gliomas in surgical procedures rely heavily on precise segmentation results, which delineate the tumor region from magnetic resonance imaging (MRI) scans of the brain. Conventional 3D CNN methods often resort to patch-based processing because of limitations in GPU memory. This paper presents an approach for segmenting brain tumors into distinct subregions, namely the whole tumor, tumor core, and enhancing tumor, using a 3D tiled convolution (3DTC)-based segmentation method. The 3DTC method allows larger patch sizes without requiring hardware with large GPU memory. This study makes three significant modifications to the standard 3D U-Net. First, we incorporate 3D tiled convolution as the initial layer of our proposed models. Second, we substitute the trilinear upsampling layer with a dense upsampling convolution layer. Third, we replace the standard convolution block with recurrent residual blocks in the proposed R2AU-Net. An average ensembling technique was applied to the best frameworks to achieve accurate results on the validation set of the BraTS 2020 dataset. Evaluation of our method on the validation set yielded Dice scores of 90.76%, 83.39%, and 74.77% for the whole tumor, tumor core, and enhancing tumor regions, respectively.
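The memory-saving idea behind tiled convolution can be illustrated with a minimal sketch: split the volume into depth tiles, give each tile a halo of overlapping slices equal to the kernel extent minus one, convolve each tile independently, and concatenate the results. The function names and the single-axis tiling below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def conv3d_valid(vol, kernel):
    """Naive 'valid'-mode 3D convolution (cross-correlation, as in CNNs)."""
    kd, kh, kw = kernel.shape
    d, h, w = vol.shape
    out = np.zeros((d - kd + 1, h - kh + 1, w - kw + 1))
    for z in range(out.shape[0]):
        for y in range(out.shape[1]):
            for x in range(out.shape[2]):
                out[z, y, x] = np.sum(vol[z:z + kd, y:y + kh, x:x + kw] * kernel)
    return out

def tiled_conv3d(vol, kernel, tile_depth):
    """Convolve the volume tile-by-tile along the depth axis.

    Each tile carries a halo of (kernel_depth - 1) extra slices, so the
    stitched result is identical to convolving the whole volume at once,
    while only one tile at a time needs to reside in (GPU) memory.
    """
    kd = kernel.shape[0]
    outs = []
    for z0 in range(0, vol.shape[0] - kd + 1, tile_depth):
        tile = vol[z0 : z0 + tile_depth + kd - 1]
        outs.append(conv3d_valid(tile, kernel))
    return np.concatenate(outs, axis=0)
```

Because the halo reproduces the receptive field at tile borders, the tiled output matches the full-volume convolution exactly; peak memory is governed by the tile size rather than the volume size.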
KEYWORDS: Video, Video acceleration, 3D video streaming, 3D image processing, 3D displays, Integral imaging, RGB color model, Parallel processing, Internet, Imaging systems
We propose a novel technique for synchronizing elemental images with the audio signal, together with a transmission technique, for a glasses-free 3D TV system based on integral imaging in real time. The main idea is to generate real-time 3D video from elemental images synchronized with the audio stream. The system captures the depth and RGB data of each video frame through an Intel RealSense 3D camera and the audio stream from a microphone. The audio is sampled per video frame and kept in separate buffers that share the same index. The frames are divided into elemental images using an elemental image generation algorithm, and the audio data is synchronized according to the index. The stream of elemental images and the corresponding audio data is then transmitted to a data server. The display device fetches and decodes this data to produce video that is viewed in 3D through a multi-array of lenses.
Coronary artery blockage is a major cause of heart attacks. Several techniques exist to diagnose coronary artery blockage as well as other types of heart disease. In this paper, we present a fully automated computerized model for detecting coronary artery blockage using image processing techniques, so that the system does not have to rely on human inspection. Using efficient image processing techniques and AI algorithms, the system allows faster and more reliable detection of narrowing in the walls of the coronary arteries caused by the accumulation of different artery-blocking agents. The system requires a 64-slice or 128-slice CTA image as input. After acquisition of the desired input image, it goes through several steps to determine the region of interest. This research proposes a two-stage approach comprising a pre-processing stage and a decision stage. The pre-processing stage involves common image processing strategies, while the decision stage involves extracting and calculating features to determine the final result using AI algorithms. This model effectively enables early detection of coronary artery blockage through segmentation, quantification, and identification of the degree of blockage and the risk factors for heart attack.
KEYWORDS: Video, 3D video streaming, 3D image processing, Integral imaging, 3D displays, RGB color model, Video acceleration, Internet, Imaging systems, Glasses
We propose a novel technique for synchronizing elemental images with the audio signal, together with a transmission technique, for a glasses-free 3D TV system based on integral imaging. The main idea is to generate 3D video from elemental images synchronized with the audio stream. The system captures the depth and RGB data of each video frame through an Intel RealSense 3D camera and the audio stream from a microphone. The audio is sampled according to the duration of each video frame and kept in separate buffers that share the same index. The frames are divided into elemental images using an elemental image generation algorithm, and the audio signal is synchronized according to the index. The stream of elemental images and the corresponding audio data is then transmitted to a data server for storage. The HLS streaming protocol is used to stream the TV content. A dedicated web application fetches the data from the server and plays the video on the user's display device. With an array of micro-lenses in front of the display, the video is viewed in three dimensions through integral imaging technology, eliminating the need for 3D glasses.
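The per-frame audio buffering with a shared index that both abstracts describe can be sketched as follows. The frame rate, sample rate, and record layout are assumptions for illustration, not values from the papers.

```python
# Hypothetical parameters: 30 fps video, 44.1 kHz mono audio.
FPS = 30
SAMPLE_RATE = 44100
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS  # 1470 audio samples per video frame

def synchronize(frames, audio_samples):
    """Pair video frame i with audio chunk i.

    Both buffers share the index i, so the receiver can match each
    elemental-image set with its audio slice without timestamps.
    """
    pairs = []
    for i, frame in enumerate(frames):
        chunk = audio_samples[i * SAMPLES_PER_FRAME : (i + 1) * SAMPLES_PER_FRAME]
        pairs.append({"index": i, "frame": frame, "audio": chunk})
    return pairs
```

On the display side, decoding simply walks the shared index: for each received pair, the elemental images are shown while the same-indexed audio chunk is played.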
KEYWORDS: Imaging systems, Integral imaging, Cameras, 3D image processing, Computing systems, 3D displays, Parallel processing, Image processing, Data acquisition, Graphics processing units
An improved and efficient system for faster elemental image generation in a real-time integral imaging 3D display system, aided by GPU parallel processing, is proposed. Previously implemented real-time integral imaging systems achieved frame rates greater than 30 fps for elemental image generation, but this improved, more efficient system produces elemental images at rates greater than 65 fps. Our proposed model consists of the following steps: real-time acquisition of object information using a Kinect sensor, and generation of elemental image sets using a pixel mapping algorithm accelerated by GPU parallel processing. To implement this system, the color (RGB) and depth data of each object point are first acquired from the depth camera (Kinect sensor). Using the acquired information, we create the elemental image sets with the pixel mapping algorithm. Finally, we implemented the pixel mapping algorithm on the GPU, which significantly increased the overall computational speed of the real-time integral display system. This marked increase in elemental image generation speed opens up new possibilities for integral imaging technology; for example, merging this system with multi-directional projection for real-time integral imaging can markedly enhance the viewing angle.
This proposed method aims at full automation of coronary artery blockage detection through image processing techniques, so that the system does not have to rely on human inspection. The goal of the research is to implement the proposed image processing techniques so the system can detect narrowing of the coronary artery walls caused by the accumulation of different artery-blocking agents. The system requires a 64-slice CTA image as input. After acquisition of the desired input image, it goes through several steps to determine the region of interest. This research proposes a two-stage approach comprising a pre-processing stage and a decision stage. The pre-processing stage involves common image processing strategies, while the decision stage involves extracting and calculating two feature ratios to determine the final result. To gain further insight into the subject of these examinations, this research proposes the use of an algorithm to create a 3D model.
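The papers do not define their feature ratios here, but quantifying the "degree of blockage" from a segmented vessel is commonly done with a standard area-stenosis grade. The sketch below shows that conventional measure as a hypothetical stand-in, not the papers' actual features.

```python
def stenosis_percent(lumen_areas, reference_area):
    """Standard area-stenosis grade (a hypothetical stand-in for the
    papers' feature ratios):

        100 * (1 - minimal lumen area / reference lumen area)

    `lumen_areas` are cross-sectional lumen areas along the vessel
    (e.g. segmented pixel counts per CTA slice); `reference_area` is
    the lumen area of a nearby healthy segment.
    """
    minimal = min(lumen_areas)
    return 100.0 * (1.0 - minimal / reference_area)
```

For example, a segment whose narrowest cross-section has half the area of the healthy reference grades as 50% stenosis; thresholds on this grade could then drive the decision stage.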
KEYWORDS: 3D image reconstruction, 3D image processing, Imaging systems, Image enhancement, Integral imaging, Parallel processing, 3D image enhancement, 3D acquisition, Image processing, Reconstruction algorithms, 3D displays, Cameras
A novel method for viewing angle enhancement of a real-time integral imaging system using multi-directional projections and GPU parallel processing is proposed. The proposed system comprises three processes: information acquisition of real objects, generation of multi-directional elemental image sets, and reconstruction of 3D images using a multi-directional projection scheme. To implement this system, the depth and color (RGB) information of each object point is captured by a depth camera; then a dynamic algorithm and GPU parallel processing are used to generate multi-directional elemental image sets to be illuminated in different directions while maintaining real-time processing speed; finally, 3D images are reconstructed using a time-multiplexed multi-directional projection scheme through an appropriate optical setup of a projection-type integral imaging system. Multi-directional illumination of the elemental image sets enhances the optical ray divergence of the reconstructed 3D images according to the directional projection angles. Hence, a real-time integral imaging system with an enhanced viewing angle is achieved.
In this paper we present a method for fast computer-generated hologram (CGH) computation for long-depth objects using multiple wavefront recording planes (WRPs). The WRPs are placed between the object plane and the hologram plane, and each WRP records the wavefront from a section of the object. For a long-depth object, multiple WRPs reduce the calculation time and also enhance the quality of the reconstructed object compared with a single WRP. With our proposed method, the hologram of the object can be generated in real time.
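One common way to state the WRP idea, with symbols assumed here rather than taken from the paper: each object point j sits a small distance d_j from its assigned WRP, so its spherical wavelet has a small support on that plane and is cheap to accumulate; the hologram is then obtained with one diffraction propagation per WRP rather than per point.

```latex
% Wavefront accumulated on a recording plane from its assigned object points
u_{\mathrm{WRP}}(x, y) = \sum_{j} \frac{a_j}{r_j} \exp(i k r_j),
\qquad r_j = \sqrt{(x - x_j)^2 + (y - y_j)^2 + d_j^2}

% Hologram plane: angular-spectrum propagation over distance z_m from WRP m
H(x, y) = \sum_{m} \mathcal{F}^{-1}\!\left[
  \mathcal{F}\{u_{\mathrm{WRP},m}\}\,
  e^{\, i z_m \sqrt{k^2 - k_x^2 - k_y^2}} \right]
```

Splitting a long-depth object across several WRPs keeps every d_j small, which is why multiple planes cut the point-accumulation cost compared with a single distant WRP.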
A viewing-angle-enhanced integral imaging (II) system using multi-directional projections and an elemental image (EI) resizing method is proposed. In this method, each elemental lens of the micro-lens array collects multi-directional illuminations of multiple EI sets and produces multiple point light sources (PLSs) at different positions in the focal plane; the positions of the PLSs can be controlled by the projection angles. The viewing zone thus consists of multiple diverging ray bundles and is wider than in the conventional method, which produces a viewing zone from only a single set of EI projections. Hence the viewing angle of the reconstructed image is enhanced.