#ExplainableAI

Winbuzzer

Microsoft and UW Develop AI That Spots Breast Cancer by Learning What's Normal

#AI #Microsoft #MedicalAI #Healthcare #MachineLearning #ExplainableAI

https://winbuzzer.com/2025/07/24/microsoft-and-uw-develop-ai-that-spots-breast-cancer-by-learning-whats-normal-xcxwbn
Aneesh Sathe

AI: Explainable Enough

They look really juicy, she said. I was sitting in a small room with a faint chemical smell, doing one of my first customer interviews. There is a sweet spot between going too deep and asserting a position. Good AI has to be just explainable enough to satisfy the user without overwhelming them with information. Luckily, I wasn't new to the problem.

[Image: Nuthatcher atop Persimmons (ca. 1910) by Ohara Koson. Original from The Clark Art Institute. Digitally enhanced by rawpixel.]

Coming from a microscopy and bio background with a strong inclination towards image analysis, I had picked up deep learning as a way to be lazy in the lab. Why bother figuring out features of interest when you can have a computer do it for you? That was my angle. The issue was that in 2015 no biologist would accept any kind of deep-learning analysis, and definitely not if you couldn't explain the details.

What the domain expert user doesn't want:
– An explanation of how a convolutional neural network works. Confidence scores, loss, and AUC are all meaningless to a biologist, and also to a doctor.

What the domain expert desires:
– Help at the lowest level of detail that they care about.
– An AI that identifies features A, B, and C, and says that when you see A, B, and C together, it is likely to be disease X.

Most users don't care how deep learning really works. So if you start giving them details like the IoU score of an object-detection bounding box, or whether you used YOLO or R-CNN, their eyes will glaze over and you will never get a customer. Draw a bounding box, heat map, or outline with the predicted label, and stop there. It's also bad to go to the other extreme. If the AI just states the diagnosis for the whole image, the AI might be right, but the user does not get to participate in the process. Not to mention that regulatory risk goes way up.

This applies beyond images; consider LLMs. No one with any expertise likes a black box. Today, why do LLMs generate code instead of directly doing the thing that the programmer is asking them to do? It's because the programmer wants to ensure that the code "works", and they have the expertise to figure out if and when it goes wrong. It's the same reason that vibe coding is great for prototyping but not for production, and why frequent readers can spot AI patterns, ahem, easily. So, in a Betty Crocker cake-mix kind of way, let the user add the egg.

Building explainable-enough AI takes immense effort. It is actually easier to train AI to diagnose the whole image, or to give every detail. Generating high-quality data at that just-right level is very difficult and expensive. However, do it right and the effort pays off. The outcome is an AI-Human causal prediction machine, where the causes, i.e. the mid-level features, inform the user and build confidence towards the final outcome. The deep-learning part is still a black box, but the user doesn't mind because you aid their thinking.

I'm excited by some new developments like REX (https://rex-xai.readthedocs.io/en/stable/), which sort of retrofit causality onto the usual deep-learning models. With improvements in performance, user preferences for detail may change, but I suspect the need for AI to be explainable enough will remain. Perhaps we will even have custom labels like 'juicy'.

#AI #AIAdoption #AICommunication #AIExplainability #AIForDoctors #AIInHealthcare #AIInTheWild #AIProductDesign #AIUX #artificialIntelligence #BettyCrockerThinking #BiomedicalAI #Business #CausalAI #DataProductDesign #DeepLearning #ExplainableAI #HumanAIInteraction #ImageAnalysis #LLMs #MachineLearning #StartupLessons #statistics #TechMetaphors #techPhilosophy #TrustInAI #UserCenteredAI #XAI
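The "explainable enough" presentation the post argues for, mid-level features reported in the user's own vocabulary with model internals hidden, can be sketched as a thin reporting layer over a detector's raw output. Everything below (the `Feature` class, the labels, and the rule table) is invented for illustration, not taken from any real system:

```python
from dataclasses import dataclass

# Hypothetical detected feature: a plain-language label plus a score
# that stays internal and is never shown to the user.
@dataclass
class Feature:
    label: str
    confidence: float

# Hypothetical rule table: when features A, B, and C co-occur,
# suggest condition X (the "causal" mid-level story the user wants).
RULES = {
    ("enlarged nuclei", "irregular border", "dense cluster"): "suspicious lesion",
}

def explain(features: list[Feature], threshold: float = 0.5) -> str:
    """Report only the mid-level features and the conclusion they support,
    hiding internals like confidence scores and architecture details."""
    present = {f.label for f in features if f.confidence >= threshold}
    for needed, diagnosis in RULES.items():
        if present.issuperset(needed):
            found = ", ".join(needed)
            return f"Found {found}; together these suggest: {diagnosis}."
    return "Found: " + (", ".join(sorted(present)) or "no notable features") + "."

print(explain([Feature("enlarged nuclei", 0.9),
               Feature("irregular border", 0.8),
               Feature("dense cluster", 0.7)]))
# → Found enlarged nuclei, irregular border, dense cluster; together these suggest: suspicious lesion.
```

The design choice mirrors the post: the user sees features and the conclusion they jointly support, never the IoU scores or the detector architecture that produced them.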
IJCAI Conference

Excited to welcome Cynthia Rudin, Duke University, as an invited speaker at #IJCAI2025 in Montreal! 🇨🇦 Known for her groundbreaking work in interpretable machine learning, Prof. Rudin will deliver her IJCAI-25 John McCarthy Award lecture.

🎥 Why #ExplainableAI? https://youtu.be/9kzO5CKzFxQ
Anita Graser 🇪🇺🇺🇦🇬🇪

🤩 #MobilityDataAnalytics & #GIScience all around:

Attending the @emeraldseu GA today. Presented progress on #Trajectools and our #explainableAI & #activeLearning for #MobilityDataScience. While simultaneously traveling to #AGIT2025 🚄
isws

CRISIS IN MACHINE LEARNING - Semantics to the Rescue
Frank van Harmelen starts his keynote at ISWS 2025 with this headline from "The AI Times".
So, what is this crisis about? There are the following (still) unsolved problems in AI research:
- Learning from small data
- Explainable AI
- Updating
- Learning by explaining

#isws2025 #llms #AI #semanticweb #knowledgegraphs #explainableAI #summerschool #keynote
Sanjay Mohindroo

Delve into the darker realms of artificial intelligence with this reflective exploration of AI bias, toxic data practices, and ethical dilemmas. Discover the challenges and opportunities facing IT leaders as they navigate the complexities of AI technology.

#ArtificialIntelligence #AIethics #DataEthics #TechnologyEthics #ExplainableAI #ChatGPT #EthicalAI #Regulation #AGI #SanjayMohindroo
https://medium.com/@sanjay.mohindroo66/the-dark-side-of-ai-navigating-ethical-waters-in-a-digital-era-b75bb78bbe5a
Tim Green

AI's growing power demands transparency: understanding how decisions are made is key to trust and accountability. Explainable AI (XAI) bridges the gap between black-box algorithms and human collaboration.
Discover more at https://rawveg.substack.com/p/unlocking-ais-mysteries
#HumanInTheLoop #AIethics #ExplainableAI #TechInnovation
CSBJ

🧬 Could AI deliver skin cancer diagnoses with the clarity and reasoning of a dermatologist?

🔗 A two-step concept-based approach for enhanced interpretability and trust in skin lesion diagnosis. DOI: https://doi.org/10.1016/j.csbj.2025.02.013

📚 CSBJ Smart Hospital: https://www.csbj.org/smarthospital

#AIinHealthcare #ExplainableAI #SkinCancer #VLM #LLM #MedicalAI #TrustworthyAI #Dermatology #XAI #PrecisionMedicine
Miguel Afonso Caetano

"Finally, AI can fact-check itself. One large language model-based chatbot can now trace its outputs to the exact original data sources that informed them.

Developed by the Allen Institute for Artificial Intelligence (Ai2), OLMoTrace, a new feature in the Ai2 Playground, pinpoints data sources behind text responses from any model in the OLMo (Open Language Model) project.

OLMoTrace identifies the exact pre-training document behind a response, including full, direct quote matches. It also provides source links. To do so, the underlying technology uses a process called "exact-match search" or "string matching".

"We introduced OLMoTrace to help people understand why LLMs say the things they do from the lens of their training data," Jiacheng Liu, a University of Washington Ph.D. candidate and Ai2 researcher, told The New Stack.

"By showing that a lot of things generated by LLMs are traceable back to their training data, we are opening up the black boxes of how LLMs work, increasing transparency and our trust in them," he added.

To date, no other chatbot on the market provides the ability to trace a model's response back to specific sources used within its training data. This makes the news a big stride for AI visibility and transparency."

https://thenewstack.io/llms-can-now-trace-their-outputs-to-specific-training-data/

#AI #GenerativeAI #LLMs #Chatbots #ExplainableAI #Traceability #AITraining
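The "exact-match search" described in that post can be illustrated in a few lines. The naive scan below is purely a sketch of the idea: the function name `trace_spans`, the word-level granularity, and the toy corpus are all invented for the example, and the real OLMoTrace relies on a pre-built index to search trillions of training tokens efficiently:

```python
def trace_spans(response: str, corpus: dict[str, str], min_words: int = 4):
    """Return (span, doc_id) pairs for the longest word spans of `response`
    that appear verbatim in some corpus document (naive O(n^2) scan)."""
    words = response.split()
    matches = []
    i = 0
    while i < len(words):
        best = None
        # Greedily try the longest verbatim span starting at word i.
        for j in range(len(words), i + min_words - 1, -1):
            span = " ".join(words[i:j])
            for doc_id, text in corpus.items():
                if span in text:
                    best = (span, doc_id)
                    break
            if best:
                break
        if best:
            matches.append(best)
            i += len(best[0].split())  # skip past the matched span
        else:
            i += 1
    return matches

corpus = {"doc1": "the quick brown fox jumps over the lazy dog"}
print(trace_spans("a quick brown fox jumps over things", corpus))
# → [('quick brown fox jumps over', 'doc1')]
```

Each returned pair is exactly the kind of evidence the feature surfaces: a verbatim quote plus a link back to the source document that contains it.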
Tycoon World

AI in Banking Security: Revolution & Risks

#TycoonWorld #AIinBanking #BankingSecurity #CyberSecurityAI #FinTechSecurity #ArtificialIntelligence #MachineLearning #AnomalyDetection #BehavioralAnalytics #ThreatDetection #FraudPrevention #PredictiveAnalytics #EthicalAI #DataPrivacy #ExplainableAI #AdversarialAttacks #BankingInnovation #FinancialSecurity #AIethics #AIrisks #DigitalBanking #AIinFinance #AIandCybercrime #SmartBanking #FinTechTrends #CyberRiskMitigation

https://tycoonworld.in/ai-in-banking-security-revolution-risks/
CSBJ

🧬 Can we trust AI in bioinformatics if we don't understand how it makes decisions?

As AI becomes central to bioinformatics, the opacity of its decision-making remains a major concern.

🔗 Demystifying the Black Box: A Survey on Explainable Artificial Intelligence (XAI) in Bioinformatics. Computational and Structural Biotechnology Journal, DOI: https://doi.org/10.1016/j.csbj.2024.12.027

📚 CSBJ: https://www.csbj.org/

#XAI #Bioinformatics #AIethics #ExplainableAI #Genomics #AI #BiomedicalAI #MachineLearning

📢 PyCaret’s dashboards make machine learning models EXPLAINABLE! 🧠📊

✅ Understand feature importance
✅ Visualize model accuracy & precision
✅ Perform “What-if” analysis to test predictions

A must-read guide featuring a diabetes prediction case study! 🚀

Read the details: medium.com/@omkamal/understand

Medium · Understanding PyCaret Amazing Dashboards: A Complete Guide · By OmarEbnElKhattab Hosney
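The feature-importance idea behind such dashboards is model-agnostic and can be sketched without PyCaret. Below is a minimal permutation-importance calculation in pure Python; the `permutation_importance` helper, the toy "diabetes-style" data, and the hand-coded classifier are all hypothetical and do not reflect PyCaret's actual API:

```python
import random

def permutation_importance(predict, X, y, n_features, seed=0):
    """Measure each feature's importance as the accuracy drop when that
    feature's column is shuffled, breaking its link to the labels."""
    rng = random.Random(seed)
    def accuracy(rows):
        return sum(predict(r) == t for r, t in zip(rows, y)) / len(y)
    base = accuracy(X)
    importances = []
    for f in range(n_features):
        col = [row[f] for row in X]
        rng.shuffle(col)
        shuffled = [row[:f] + [v] + row[f + 1:] for row, v in zip(X, col)]
        importances.append(base - accuracy(shuffled))
    return importances

# Toy data: the label depends only on feature 0 (think "glucose level");
# feature 1 is constant, so shuffling it cannot change any prediction.
X = [[float(x), 7.0] for x in range(20)]
y = [int(x >= 10) for x in range(20)]
model = lambda row: int(row[0] >= 10)  # stand-in for a trained classifier

imps = permutation_importance(model, X, y, n_features=2)
print(imps)  # the constant second feature scores exactly 0.0
```

The same shuffle-and-rescore loop also doubles as a crude "what-if" probe: editing a single row's feature value and re-running `model` shows how the prediction responds, which is the interaction the dashboards expose interactively.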

Applications are now open for the 2025 International Semantic Web Research Summer School - #ISWS2025
in Bertinoro, Italy, from June 8-14, 2025
Topic: Knowledge Graphs for Reliable AI
Application Deadline: March 25, 2025
Webpage: 2025.semanticwebschool.org/

Great keynote speakers: Frank van Harmelen (VU), Natasha Noy (Google), Enrico Motta (KMI)

#semanticweb #knowledgegraphs #AI #generativeAI #responsibleAI #explainableAI #reliableAI @albertmeronyo @AxelPolleres @lysander07

#ITByte: #ExplainableAI (#XAI), often known as Interpretable AI, or Explainable Machine Learning (XML), either refers to an AI system over which it is possible for humans to retain intellectual oversight, or to the methods to achieve this.

The main focus is usually on making the reasoning behind the AI's decisions or predictions more understandable and transparent.

knowledgezone.co.in/posts/Expl