Background: Artificial intelligence (AI) is reshaping healthcare, presenting new opportunities in diagnostics, clinical decision support, workflow optimization, patient engagement, and population health. Yet important concerns remain about trust, transparency, bias, privacy, and the adequacy of existing regulatory frameworks. The objective of this study was to compare the perspectives of healthcare professionals (HCPs) and non-HCPs on the integration of AI in healthcare, with a focus on identifying perceived benefits, risks, ethical concerns, and barriers to adoption.
Methods: We conducted an IRB-approved cross-sectional survey of adults aged ≥18 years, sampling both HCPs and non-HCPs. The questionnaire assessed perceived benefits and risks of AI, trust in AI systems, health bot applications, privacy and ethical concerns, regulatory priorities, and views on AI's role in clinical decision-making. Responses from HCPs and non-HCPs were compared using descriptive statistics and group-level difference testing.
Results: A total of 297 participants completed the survey, including 189 HCPs and 108 non-HCPs. Both groups expressed strong agreement that AI can improve efficiency, enhance access to care, support diagnosis, reduce medical errors, and aid early disease detection. However, trust in AI systems remained limited: nearly two-thirds of respondents expressed no confidence in AI's ability to ensure privacy, safeguard data, or make unbiased ethical decisions. HCPs placed greater emphasis on safety, accountability, transparency, and regulatory oversight, particularly in high-risk clinical environments, whereas non-HCPs were more likely to endorse shared responsibility when AI causes harm. Across groups, the majority believed that AI should serve primarily as an assistive tool, with humans retaining decision-making authority. Concerns about cost, infrastructure, and digital literacy emerged as prominent barriers to equitable AI adoption.
Conclusions: Despite recognizing AI's potential benefits, both clinicians and the public remain cautious about its risks and ethical limitations. These findings highlight the need for robust governance, transparent design, targeted education, and human-centered approaches to promote trustworthy, safe, and equitable AI integration in healthcare.