Ali Maleki

Blog Posts

News, announcements, and more

Thesis/Dissertation Topic Proposals

Proposal 1: Using attention in generative adversarial networks (GANs) to improve the quality of synthetic SSVEP signals
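As a hedged sketch of where attention could sit in such a GAN generator, the following shows scaled dot-product self-attention applied to a 1-D multichannel epoch. All shapes, weights, and the NumPy formulation here are illustrative assumptions, not part of the proposal:

```python
import numpy as np

def self_attention_1d(x, wq, wk, wv):
    """Scaled dot-product self-attention over a 1-D signal.
    x: (T, C) epoch with T time steps and C channels."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])      # (T, T) pairwise similarities
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # softmax over time steps
    return weights @ v  # (T, C): each step attends to every other step

rng = np.random.default_rng(0)
T, C = 128, 8  # e.g. 128 samples of an 8-channel synthetic SSVEP epoch
x = rng.standard_normal((T, C))
wq, wk, wv = (rng.standard_normal((C, C)) * 0.1 for _ in range(3))
out = self_attention_1d(x, wq, wk, wv)  # same shape as the input epoch
```

In a real generator the attention output would typically be added back to its input as a residual branch, so the network can learn long-range temporal structure on top of the convolutional features.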

Proposal 2:

Using a deep neural network to estimate the signal-to-noise ratio (SNR) of SSVEP signals

  • Build a very large synthetic dataset labeled with SNR
  • Construct a two-dimensional description of the signal such that the stimulation frequency and its harmonics lie exactly at the center of the frequency dimension (one of the dimensions must be frequency)
  • Train a deep CNN on the synthetic data with the SNR labels
  • Use the trained network to estimate the SNR of real signals
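The first two steps above can be sketched as follows. The sampling rate, epoch length, harmonic count, and window width are illustrative assumptions:

```python
import numpy as np

FS = 250  # assumed sampling rate (Hz)

def synth_ssvep(f_stim, snr_db, duration=2.0, harmonics=3, rng=None):
    """Generate one synthetic SSVEP epoch with a known SNR label.
    Signal = stimulation frequency plus decaying harmonics; the noise
    is scaled so that 10*log10(P_signal / P_noise) equals snr_db."""
    rng = rng or np.random.default_rng()
    t = np.arange(int(duration * FS)) / FS
    signal = sum(np.sin(2 * np.pi * f_stim * (h + 1) * t) / (h + 1)
                 for h in range(harmonics))
    noise = rng.standard_normal(t.size)
    ps, pn = np.mean(signal**2), np.mean(noise**2)
    noise *= np.sqrt(ps / (pn * 10 ** (snr_db / 10)))
    return signal + noise, snr_db  # (epoch, SNR label)

def centered_patch(epoch, f_stim, harmonics=3, half_bins=10):
    """2-D description: one row per harmonic, with the frequency bins
    centered exactly on that harmonic (second bullet above)."""
    spec = np.abs(np.fft.rfft(epoch))
    freqs = np.fft.rfftfreq(epoch.size, 1 / FS)
    rows = []
    for h in range(1, harmonics + 1):
        c = np.argmin(np.abs(freqs - h * f_stim))
        rows.append(spec[c - half_bins: c + half_bins + 1])
    return np.stack(rows)  # shape: (harmonics, 2*half_bins + 1)

epoch, label = synth_ssvep(f_stim=12.0, snr_db=-5.0)
patch = centered_patch(epoch, f_stim=12.0)  # shape (3, 21)
```

Repeating this over many random frequencies and SNR values yields the labeled (patch, SNR) pairs on which the CNN of the third bullet could be trained.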

Proposal 3:

Recognizing the SSVEP stimulation frequency using an explainable convolutional neural network (xCNN), as an effective way to understand how the model works and how it makes its decisions

Explainable Artificial Intelligence (XAI) is a branch of AI that focuses on developing models and algorithms that can provide understandable explanations for their decisions and predictions. The goal of XAI is to bridge the gap between the complex inner workings of AI systems and the need for transparency and trust in their outputs, especially when they impact human lives and society.

Traditional AI models, such as deep learning, are often referred to as "black boxes" because their decision-making processes are not easily interpretable by humans, including their designers. In contrast, XAI models are designed to provide insights into their decision-making processes, allowing users to understand the factors that influence their outputs.

The importance of XAI lies in its ability to address issues such as model accuracy, fairness, transparency, and potential biases. By understanding how an AI model arrives at a decision, users can better trust its outputs and identify potential issues or biases that need to be addressed.

There are several approaches to achieving explainability in AI, including:

  • Rule-based models: These models use a set of predefined rules to make decisions, which can be easily understood by humans. However, they may not be as accurate or flexible as other AI models.
  • Interpretable machine learning: This approach focuses on developing models that are inherently interpretable, such as decision trees or linear regression. These models provide insights into their decision-making processes but may sacrifice some accuracy compared to more complex models.
  • Post-hoc explanations: This approach involves providing explanations for the outputs of existing AI models. Techniques such as feature importance analysis or local surrogate models can be used to understand the factors that contribute to a specific prediction.
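As a concrete illustration of the post-hoc approach, here is a minimal permutation-feature-importance sketch. The toy "black-box" model and data are assumptions for demonstration only:

```python
import numpy as np

def permutation_importance(model, X, y, n_repeats=10, rng=None):
    """Post-hoc explanation: the drop in accuracy when one feature is
    shuffled approximates how much the model relies on that feature."""
    rng = rng or np.random.default_rng(0)
    base = np.mean(model(X) == y)          # accuracy on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = X[rng.permutation(X.shape[0]), j]  # break feature j
            importances[j] += base - np.mean(model(Xp) == y)
    return importances / n_repeats

# toy black box: it secretly predicts the class from feature 0 only
model = lambda X: (X[:, 0] > 0).astype(int)
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 3))
y = (X[:, 0] > 0).astype(int)
imp = permutation_importance(model, X, y)
# imp[0] is large; imp[1] and imp[2] are near zero, exposing the
# model's exclusive reliance on feature 0 without opening the box
```

The same idea carries over to an SSVEP classifier: permuting frequency bins of the input representation and watching the accuracy drop reveals which bins the network actually uses.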

While XAI is a rapidly evolving field, there are still challenges to be addressed. One of the main challenges is the lack of consensus on key terms and definitions, which can lead to confusion and inconsistency in research and implementation.

Additionally, achieving explainability should be considered secondary to AI effectiveness, since overly complex explanations may hinder the performance of the AI system.

Proposal 4: Detecting sudden cardiac death with deep learning


Category: Suggested research topics | Views: 210 | Published: 1402/06/27