Medicine Drug Name Detection Based Object Recognition Using Augmented Reality - medtigo



Front Public Health. 2022 Apr 29;10:881701. doi: 10.3389/fpubh.2022.881701. eCollection 2022.


Augmented Reality (AR) is a technology that integrates digital information into the user's real-world environment. It offers a modern approach to medicine, supporting medication training, surgical planning, and patient therapy, and helps explain complex medical situations to patients and their family members. With rapid advances in technology, an ever-increasing number of medical records are becoming available, containing large amounts of sensitive medical data, such as medical substances and the relations between them. To exploit the clinical texts in these records, the relevant information must be extracted from them; drugs, as one of the fundamental clinical entities, must also be recognized. Drug name recognition (DNR) seeks to identify drugs mentioned in unstructured clinical texts and classify them into predefined categories, which are then used to render a linked 3D model in the user's real-world space. This work demonstrates the use of AR to provide an interactive, visual representation of information about medicines and their applications. The proposed method is a mobile application that uses the device's native camera and an optical character recognition (OCR) algorithm to extract the text on medicine packaging. The extracted text is further processed with natural language processing (NLP) tools to identify the generic name and category of the drug using a dedicated DNR database. The database was scraped from various medical-study resources and is named the medi-drug database from a development standpoint. A 3D model prepared for the drug is then presented in AR using ARCore. The results obtained are encouraging: the proposed method can detect the text in an average time of 0.005 s and can produce the visual representation of the output in an average time of 1.5 s.
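The core text-to-drug step of the pipeline, OCR output matched against a DNR table to recover the generic name and category, can be sketched as follows. This is a minimal illustration only: the `MEDI_DRUG` table, function names, and matching logic are hypothetical stand-ins, not the authors' medi-drug database or code, and a real system would feed the camera frame through an OCR engine and pass the matched entry to the ARCore rendering layer.

```python
# Hypothetical sketch of the paper's recognition step: text extracted by
# OCR from a medicine package is normalized and looked up in a small
# drug-name-recognition (DNR) table. The table below is an illustrative
# stand-in for the scraped medi-drug database described in the abstract.
import re

MEDI_DRUG = {  # brand name (lowercased) -> (generic name, category)
    "tylenol": ("acetaminophen", "analgesic"),
    "advil": ("ibuprofen", "NSAID"),
    "amoxil": ("amoxicillin", "antibiotic"),
}

def normalize(ocr_text: str) -> list[str]:
    """Lowercase the raw OCR output and split it into alphabetic tokens,
    discarding dosages, punctuation, and OCR noise."""
    return re.findall(r"[a-z]+", ocr_text.lower())

def recognize_drug(ocr_text: str):
    """Return (brand, generic, category) for the first token that appears
    in the DNR table, or None if no known drug name is found."""
    for token in normalize(ocr_text):
        if token in MEDI_DRUG:
            generic, category = MEDI_DRUG[token]
            return token, generic, category
    return None

# In the full application, the matched entry would select the 3D model
# that ARCore anchors in the user's camera view.
print(recognize_drug("ADVIL 200mg Tablets, Pain Reliever"))
```

Token-level lookup is the simplest plausible reading of the abstract; an NLP layer as described in the paper would additionally handle multi-word names and fuzzy matches for OCR errors.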

PMID:35570914 | PMC:PMC9102603 | DOI:10.3389/fpubh.2022.881701