Using AI Large Language Models To Assess Dental History In Systemic Conditions
Published in Computational Sciences, Biomedical Research, and General & Internal Medicine

This study originated from a student-driven research initiative at Istanbul Kent University Faculty of Dentistry. Conducted within the Student Research Club, it combines voluntary student research with academic mentorship, focusing on applications of artificial intelligence in dentistry.
This study has a genuinely compelling story behind it.
At Istanbul Kent University Faculty of Dentistry, we have a Student Research Club where students voluntarily conduct research and present their work each year at our annual Student Congress. The students involved in this study are proud members of this club. A portion of their work was first presented at the Meeting of the Association for Dental Education in Europe.
Following this experience, we continued the journey together, and today we are delighted to share that the study has been published in the journal Discover Artificial Intelligence.
Follow the Topic
Discover Artificial Intelligence
This is a transdisciplinary, international journal that publishes papers on all aspects of the theory, methodology, and applications of artificial intelligence (AI).
Related Collections
Enhancing Trust in Healthcare: Implementing Explainable AI
Healthcare increasingly relies on Artificial Intelligence (AI) to assist in various tasks, including decision-making, diagnosis, and treatment planning. However, integrating AI into healthcare presents challenges, primarily related to its trustworthiness, which encompasses aspects such as transparency, fairness, privacy, safety, accountability, and effectiveness. Patients, doctors, stakeholders, and society need to have confidence in the ability of AI systems to deliver trustworthy healthcare. Explainable AI (XAI) is a critical tool that provides insights into AI decisions, making them more comprehensible (i.e., explainable/interpretable) and thus contributing to their trustworthiness. This topical collection explores the contribution of XAI to ensuring the trustworthiness of healthcare AI and to enhancing the trust of all involved parties. In particular, it investigates the impact of trustworthiness on patient acceptance, clinician adoption, and system effectiveness. It also delves into recent advancements in making healthcare AI decisions trustworthy, especially in complex scenarios, underscores real-world applications of XAI in healthcare, and addresses ethical considerations tied to aspects such as transparency, fairness, and accountability.
We invite contributions on the theoretical underpinnings of XAI in healthcare and its applications. Specifically, we solicit original (interdisciplinary) research articles that present novel methods, empirical studies, or insightful case reports. We also welcome comprehensive reviews of the existing literature on XAI in healthcare that offer unique perspectives on the challenges, opportunities, and future trajectories, as well as practical implementations that showcase real-world, trustworthy AI-driven systems for healthcare delivery and highlight lessons learned.
We invite submissions on topics including, but not limited to, the following:
- Theoretical foundations and practical applications of trustworthy healthcare AI: from design and development to deployment and integration.
- Transparency and responsibility of healthcare AI.
- Fairness and bias mitigation.
- Patient engagement.
- Clinical decision support.
- Patient safety.
- Privacy preservation.
- Clinical validation.
- Ethical, regulatory, and legal compliance.
Publishing Model: Open Access
Deadline: Sep 10, 2026
AI and Big Data-Driven Finance and Management
This collection aims to bring together cutting-edge research and practical advancements at the intersection of artificial intelligence, big data analytics, finance, and management. As AI technologies and data-driven methodologies increasingly shape the future of financial services, corporate governance, and industrial decision-making, there is a growing need to explore their applications, implications, and innovations in real-world contexts.
The scope of this collection includes, but is not limited to, the following areas:
- AI models for financial forecasting, fraud detection, credit risk assessment, and regulatory compliance
- Machine learning techniques for portfolio optimization, stock price prediction, and trading strategies
- Data-driven approaches in corporate decision-making, performance evaluation, and strategic planning
- Intelligent systems for industrial optimization, logistics, and supply chain management
- Fintech innovations, digital assets, and algorithmic finance
- Ethical, regulatory, and societal considerations in deploying AI across financial and managerial domains
By highlighting both theoretical developments and real-world applications, this collection seeks to offer valuable insights to researchers, practitioners, and policymakers. Contributions that emphasize interdisciplinary approaches, practical relevance, and explainable AI are especially encouraged.
This Collection supports and amplifies research related to SDG 8 and SDG 9.
Keywords: AI in Finance, Accountability, Applied Machine Learning, Artificial Intelligence, Big Data
Publishing Model: Open Access
Deadline: Apr 30, 2026