[{"data":1,"prerenderedAt":714},["ShallowReactive",2],{"content-query-aZ29QHVofM":3,"content-navigation-8C37fagqQL":525},{"_path":4,"_dir":5,"_draft":6,"_partial":6,"_locale":7,"title":8,"description":9,"heading":10,"abstract":11,"year":12,"tags":13,"schemaOrg":17,"body":67,"_type":497,"_id":498,"_source":499,"_file":500,"_stem":501,"_extension":502,"head":503},"/projects/vue-mcp","projects",false,"","Vue Docs MCP: Live Vue ecosystem documentation for AI assistants","Source-available MCP server giving AI assistants grounded, up-to-date access to Vue.js, Nuxt, Vite, Pinia, VueUse, and more — hybrid search over official docs with 4.8/5 answer quality.","Vue Docs MCP","Built an MCP server that gives AI assistants direct access to live Vue ecosystem documentation. Hybrid retrieval over 8 frameworks, structure-aware chunking, deterministic query pipeline, hosted free at mcp.vue-mcp.org.","2026",[14,15,16],"On GitHub","MCP","RAG",[18,42],{"@context":19,"@type":20,"name":10,"headline":8,"description":21,"datePublished":22,"dateModified":23,"author":24,"codeRepository":33,"url":34,"programmingLanguage":35,"runtimePlatform":36,"license":37,"operatingSystem":38,"applicationCategory":39,"keywords":40,"inLanguage":41},"https://schema.org","SoftwareSourceCode","Source-available MCP server providing AI assistants with grounded, up-to-date access to Vue ecosystem documentation through hybrid semantic and keyword search.","2025-03-01","2025-04-01",{"@type":25,"givenName":26,"familyName":27,"name":28,"url":29,"sameAs":30},"Person","Joel","Barmettler","Joel Barmettler","https://joelbarmettler.xyz",[31,32],"https://www.linkedin.com/in/joel-barmettler-b9ab361b7","https://github.com/joelbarmettlerUZH","https://github.com/joelbarmettlerUZH/vue-mcp","https://vue-mcp.org","Python","Docker","https://fsl.software/FSL-1.1-ALv2.template.md","Linux","DeveloperApplication","MCP, Model Context Protocol, Vue.js, Nuxt, Vite, documentation, AI tools, hybrid search, open 
source","en",{"@context":19,"@type":43,"mainEntity":44},"FAQPage",[45,51,55,59,63],{"@type":46,"name":47,"acceptedAnswer":48},"Question","What is Vue Docs MCP?",{"@type":49,"text":50},"Answer","Vue Docs MCP is a source-available MCP server that gives AI assistants direct access to live Vue ecosystem documentation. It covers 8 frameworks (Vue.js, Vue Router, VueUse, Vite, Vitest, Nuxt, Pinia, Vue DevTools) and uses hybrid semantic and keyword search to ground every answer in official docs.",{"@type":46,"name":52,"acceptedAnswer":53},"How does Vue Docs MCP compare to general documentation tools?",{"@type":49,"text":54},"Vue Docs MCP scores 4.82/5 on Vue.js questions with 98.7% API recall, compared to Context7's 2.41/5 score. It achieves this through structure-aware chunking, hybrid retrieval (dense + BM25), entity boosting, and deterministic query processing that avoids LLM calls at query time.",{"@type":46,"name":56,"acceptedAnswer":57},"What frameworks does Vue Docs MCP support?",{"@type":49,"text":58},"Currently 8 frameworks: Vue.js, Vue Router, VueUse, Vite, Vitest, Nuxt, Pinia, and Vue DevTools. The adapter-driven architecture makes adding new frameworks straightforward. Each framework gets its own search tool, API lookup, and documentation resources.",{"@type":46,"name":60,"acceptedAnswer":61},"How does the hybrid search work?",{"@type":49,"text":62},"Queries are processed through a 6-step pipeline: embed the query with Jina AI and generate BM25 sparse vectors, run hybrid search in Qdrant with reciprocal rank fusion, resolve HyPE (synthetic question) matches back to parent chunks, expand via cross-references, rerank with Jina Reranker, and reconstruct results in documentation reading order.",{"@type":46,"name":64,"acceptedAnswer":65},"Can I self-host Vue Docs MCP?",{"@type":49,"text":66},"Yes. The project is source-available with Docker Compose configurations for local and production deployment. It requires PostgreSQL, Qdrant, and API keys for Jina AI embeddings. 
The ingestion pipeline runs every 24 hours to keep documentation current.",{"type":68,"children":69,"toc":483},"root",[70,78,84,99,106,128,133,139,152,164,193,199,204,277,282,288,293,300,313,318,324,329,393,398,404,409,415,427,433,438,452,468],{"type":71,"tag":72,"props":73,"children":75},"element","h1",{"id":74},"vue-docs-mcp-live-vue-ecosystem-documentation-for-ai-assistants",[76],{"type":77,"value":8},"text",{"type":71,"tag":79,"props":80,"children":81},"p",{},[82],{"type":77,"value":83},"AI assistants hallucinate APIs. They suggest Vue 2 patterns in Vue 3 projects, invent options that do not exist, and confidently reference deprecated features. The root cause is simple: their training data has a cutoff, and the Vue ecosystem moves faster than the cutoff.",{"type":71,"tag":79,"props":85,"children":86},{},[87,89,97],{"type":77,"value":88},"Vue Docs MCP is a ",{"type":71,"tag":90,"props":91,"children":94},"a",{"href":33,"rel":92},[93],"nofollow",[95],{"type":77,"value":96},"source-available",{"type":77,"value":98}," MCP server that gives AI assistants direct, searchable access to the official documentation of 8 Vue ecosystem frameworks. Every answer is grounded in live docs from vuejs.org and friends, re-indexed every 24 hours. No API keys, no login, free to use.",{"type":71,"tag":100,"props":101,"children":103},"h2",{"id":102},"the-problem",[104],{"type":77,"value":105},"The problem",{"type":71,"tag":79,"props":107,"children":108},{},[109,111,118,120,126],{"type":77,"value":110},"When you ask an AI assistant about ",{"type":71,"tag":112,"props":113,"children":115},"code",{"className":114},[],[116],{"type":77,"value":117},"defineModel",{"type":77,"value":119}," or Nuxt's ",{"type":71,"tag":112,"props":121,"children":123},{"className":122},[],[124],{"type":77,"value":125},"useFetch",{"type":77,"value":127},", the answer you get depends on what was in the training corpus. If the docs changed since the cutoff, the AI does not know. 
It fills the gap with plausible-sounding but incorrect information. General-purpose documentation tools exist, but they treat all docs the same: fixed-size token windows, no understanding of Vue-specific patterns like the Options API vs. Composition API split, and no entity awareness for API surfaces.",{"type":71,"tag":79,"props":129,"children":130},{},[131],{"type":77,"value":132},"I built a dedicated solution: a documentation server that understands the structure of Vue ecosystem docs and retrieves precisely what the AI needs.",{"type":71,"tag":100,"props":134,"children":136},{"id":135},"how-it-works",[137],{"type":77,"value":138},"How it works",{"type":71,"tag":79,"props":140,"children":141},{},[142,144,150],{"type":77,"value":143},"Connect your AI assistant to ",{"type":71,"tag":112,"props":145,"children":147},{"className":146},[],[148],{"type":77,"value":149},"mcp.vue-mcp.org/mcp",{"type":77,"value":151}," and start coding. For Claude Code:",{"type":71,"tag":153,"props":154,"children":159},"pre",{"className":155,"code":157,"language":158,"meta":7},[156],"language-bash","claude mcp add vue-mcp --transport streamable-http https://mcp.vue-mcp.org/mcp\n","bash",[160],{"type":71,"tag":112,"props":161,"children":162},{"__ignoreMap":7},[163],{"type":77,"value":157},{"type":71,"tag":79,"props":165,"children":166},{},[167,169,175,177,183,185,191],{"type":77,"value":168},"The server exposes three types of MCP tools per framework: ",{"type":71,"tag":112,"props":170,"children":172},{"className":171},[],[173],{"type":77,"value":174},"docs_search",{"type":77,"value":176}," for semantic search scoped to specific topics, ",{"type":71,"tag":112,"props":178,"children":180},{"className":179},[],[181],{"type":77,"value":182},"api_lookup",{"type":77,"value":184}," for instant fuzzy-matched API reference, and ",{"type":71,"tag":112,"props":186,"children":188},{"className":187},[],[189],{"type":77,"value":190},"get_related",{"type":77,"value":192}," for discovering connected 
APIs and documentation. When an AI assistant needs to answer a Vue question, it calls these tools instead of relying on its training data.",{"type":71,"tag":100,"props":194,"children":196},{"id":195},"supported-frameworks",[197],{"type":77,"value":198},"Supported frameworks",{"type":71,"tag":79,"props":200,"children":201},{},[202],{"type":77,"value":203},"Eight frameworks are indexed today, each with its own dedicated tools and resources:",{"type":71,"tag":205,"props":206,"children":207},"ul",{},[208,220,230,240,250],{"type":71,"tag":209,"props":210,"children":211},"li",{},[212,218],{"type":71,"tag":213,"props":214,"children":215},"strong",{},[216],{"type":77,"value":217},"Vue.js",{"type":77,"value":219}," — 4.82/5 answer quality, 98.7% API recall",{"type":71,"tag":209,"props":221,"children":222},{},[223,228],{"type":71,"tag":213,"props":224,"children":225},{},[226],{"type":77,"value":227},"Vue Router",{"type":77,"value":229}," — 4.78/5, 88.8% API recall",{"type":71,"tag":209,"props":231,"children":232},{},[233,238],{"type":71,"tag":213,"props":234,"children":235},{},[236],{"type":77,"value":237},"VueUse",{"type":77,"value":239}," — 4.89/5, 100% API recall",{"type":71,"tag":209,"props":241,"children":242},{},[243,248],{"type":71,"tag":213,"props":244,"children":245},{},[246],{"type":77,"value":247},"Vite",{"type":77,"value":249}," — 4.94/5, 87.8% API recall",{"type":71,"tag":209,"props":251,"children":252},{},[253,258,260,265,266,271,272],{"type":71,"tag":213,"props":254,"children":255},{},[256],{"type":77,"value":257},"Vitest",{"type":77,"value":259},", ",{"type":71,"tag":213,"props":261,"children":262},{},[263],{"type":77,"value":264},"Nuxt",{"type":77,"value":259},{"type":71,"tag":213,"props":267,"children":268},{},[269],{"type":77,"value":270},"Pinia",{"type":77,"value":259},{"type":71,"tag":213,"props":273,"children":274},{},[275],{"type":77,"value":276},"Vue DevTools",{"type":71,"tag":79,"props":278,"children":279},{},[280],{"type":77,"value":281},"These 
scores come from an evaluation suite that tests hundreds of questions per framework and compares against competitors using an LLM judge. Against Context7, the closest general-purpose alternative, Vue Docs MCP scores 4.82/5 vs. 2.41/5 on Vue.js questions.",{"type":71,"tag":100,"props":283,"children":285},{"id":284},"architecture",[286],{"type":77,"value":287},"Architecture",{"type":71,"tag":79,"props":289,"children":290},{},[291],{"type":77,"value":292},"The system splits into two pipelines: offline ingestion and online retrieval.",{"type":71,"tag":294,"props":295,"children":297},"h3",{"id":296},"ingestion-pipeline",[298],{"type":77,"value":299},"Ingestion pipeline",{"type":71,"tag":79,"props":301,"children":302},{},[303,305,311],{"type":77,"value":304},"Every 24 hours, the ingestion pipeline clones the latest documentation sources, parses them into structured chunks, and indexes them. The parsing is structure-aware: it chunks at heading boundaries rather than fixed token windows, keeping code blocks together with their explanations. Each framework has a ",{"type":71,"tag":112,"props":306,"children":308},{"className":307},[],[309],{"type":77,"value":310},"SourceAdapter",{"type":77,"value":312}," that handles framework-specific quirks (stripping Options API blocks from Vue docs, generating TypeDoc references for Vue Router, parsing frontmatter-based sorting in VueUse).",{"type":71,"tag":79,"props":314,"children":315},{},[316],{"type":77,"value":317},"After parsing, the pipeline extracts API entities deterministically (no LLM involved), generates contextual prefixes for ambiguous chunks using Gemini, creates synthetic developer questions (HyPE — Hypothetical Prompt Embeddings) for key sections, and embeds everything with Jina AI into Qdrant alongside BM25 sparse vectors. 
PostgreSQL stores entities, synonyms, page metadata, and index state.",{"type":71,"tag":294,"props":319,"children":321},{"id":320},"query-pipeline",[322],{"type":77,"value":323},"Query pipeline",{"type":71,"tag":79,"props":325,"children":326},{},[327],{"type":77,"value":328},"When an AI assistant calls a tool, the query goes through a 6-step deterministic pipeline:",{"type":71,"tag":330,"props":331,"children":332},"ol",{},[333,343,353,363,373,383],{"type":71,"tag":209,"props":334,"children":335},{},[336,341],{"type":71,"tag":213,"props":337,"children":338},{},[339],{"type":77,"value":340},"Embed and detect",{"type":77,"value":342}," — Generate dense (Jina) and sparse (BM25) vectors, extract mentioned entities with fuzzy matching via rapidfuzz",{"type":71,"tag":209,"props":344,"children":345},{},[346,351],{"type":71,"tag":213,"props":347,"children":348},{},[349],{"type":77,"value":350},"Hybrid search",{"type":77,"value":352}," — Query Qdrant with reciprocal rank fusion combining dense and sparse results, boosted by entity matches",{"type":71,"tag":209,"props":354,"children":355},{},[356,361],{"type":71,"tag":213,"props":357,"children":358},{},[359],{"type":77,"value":360},"Resolve HyPE",{"type":77,"value":362}," — Map any hits on synthetic questions back to their parent documentation chunks",{"type":71,"tag":209,"props":364,"children":365},{},[366,371],{"type":71,"tag":213,"props":367,"children":368},{},[369],{"type":77,"value":370},"Expand",{"type":77,"value":372}," — Follow cross-references at three priority levels to pull in related context",{"type":71,"tag":209,"props":374,"children":375},{},[376,381],{"type":71,"tag":213,"props":377,"children":378},{},[379],{"type":77,"value":380},"Rerank",{"type":77,"value":382}," — Score candidates with Jina Reranker, discard anything below 
threshold",{"type":71,"tag":209,"props":384,"children":385},{},[386,391],{"type":71,"tag":213,"props":387,"children":388},{},[389],{"type":77,"value":390},"Reconstruct",{"type":77,"value":392}," — Reassemble results in documentation reading order, merging adjacent chunks into coherent sections",{"type":71,"tag":79,"props":394,"children":395},{},[396],{"type":77,"value":397},"No LLM is called at query time. This keeps latency low, costs near zero (~$0.0005 per query), and eliminates the risk of compounding hallucinations.",{"type":71,"tag":100,"props":399,"children":401},{"id":400},"mcp-resources-and-prompts",[402],{"type":77,"value":403},"MCP resources and prompts",{"type":71,"tag":79,"props":405,"children":406},{},[407],{"type":77,"value":408},"Beyond search tools, the server exposes MCP resources that let AI assistants browse documentation proactively: full tables of contents, raw markdown pages, complete API indices, and per-entity detail views. It also registers structured prompts for common workflows like debugging issues, comparing APIs, and planning migrations. These give the AI a systematic approach rather than ad-hoc searching.",{"type":71,"tag":100,"props":410,"children":412},{"id":411},"adapter-driven-extensibility",[413],{"type":77,"value":414},"Adapter-driven extensibility",{"type":71,"tag":79,"props":416,"children":417},{},[418,420,425],{"type":77,"value":419},"Adding a new framework means implementing a ",{"type":71,"tag":112,"props":421,"children":423},{"className":422},[],[424],{"type":77,"value":310},{"type":77,"value":426}," with hooks for cloning, file discovery, content cleaning, entity dictionary construction, and sort-key generation. The adapter pattern isolates framework-specific logic (Vue's Options/Composition API split, Nuxt's auto-imports, VueUse's function-per-page structure) from the shared parsing, embedding, and retrieval infrastructure. 
The core codebase is ~3,000 lines of Python.",{"type":71,"tag":100,"props":428,"children":430},{"id":429},"deployment",[431],{"type":77,"value":432},"Deployment",{"type":71,"tag":79,"props":434,"children":435},{},[436],{"type":77,"value":437},"The production stack runs on an Infomaniak OpenStack VM with Docker Compose: FastMCP server, PostgreSQL 17, Qdrant 1.17, and Traefik for TLS and rate limiting. The ingestion container self-schedules every 24 hours and uses content hashing for incremental updates. The server polls PostgreSQL every 60 seconds for hot-reload, so new documentation appears without downtime. CI/CD via GitHub Actions handles automated builds, encrypted backups, and deployment.",{"type":71,"tag":79,"props":439,"children":440},{},[441,443,450],{"type":77,"value":442},"The entire stack is source-available under the ",{"type":71,"tag":90,"props":444,"children":447},{"href":445,"rel":446},"https://fsl.software",[93],[448],{"type":77,"value":449},"FSL-1.1-ALv2",{"type":77,"value":451}," license (converting to Apache 2.0 after two years).",{"type":71,"tag":79,"props":453,"children":454},{},[455,460,462],{"type":71,"tag":213,"props":456,"children":457},{},[458],{"type":77,"value":459},"Repository:",{"type":77,"value":461}," 
",{"type":71,"tag":90,"props":463,"children":465},{"href":33,"rel":464},[93],[466],{"type":77,"value":467},"github.com/joelbarmettlerUZH/vue-mcp",{"type":71,"tag":79,"props":469,"children":470},{},[471,476,477],{"type":71,"tag":213,"props":472,"children":473},{},[474],{"type":77,"value":475},"Documentation:",{"type":77,"value":461},{"type":71,"tag":90,"props":478,"children":480},{"href":34,"rel":479},[93],[481],{"type":77,"value":482},"vue-mcp.org",{"title":7,"searchDepth":484,"depth":484,"links":485},2,[486,487,488,489,494,495,496],{"id":102,"depth":484,"text":105},{"id":135,"depth":484,"text":138},{"id":195,"depth":484,"text":198},{"id":284,"depth":484,"text":287,"children":490},[491,493],{"id":296,"depth":492,"text":299},3,{"id":320,"depth":492,"text":323},{"id":400,"depth":484,"text":403},{"id":411,"depth":484,"text":414},{"id":429,"depth":484,"text":432},"markdown","content:3.projects:2.vue-mcp.md","content","3.projects/2.vue-mcp.md","3.projects/2.vue-mcp","md",{"script":504},[505],{"type":506,"key":507,"nodes":508,"data-nuxt-schema-org":524},"application/ld+json","schema-org-graph",[509,512],{"@context":19,"@type":20,"name":10,"headline":8,"description":21,"datePublished":22,"dateModified":23,"author":510,"codeRepository":33,"url":34,"programmingLanguage":35,"runtimePlatform":36,"license":37,"operatingSystem":38,"applicationCategory":39,"keywords":40,"inLanguage":41},{"@type":25,"givenName":26,"familyName":27,"name":28,"url":29,"sameAs":511},[31,32],{"@context":19,"@type":43,"mainEntity":513},[514,516,518,520,522],{"@type":46,"name":47,"acceptedAnswer":515},{"@type":49,"text":50},{"@type":46,"name":52,"acceptedAnswer":517},{"@type":49,"text":54},{"@type":46,"name":56,"acceptedAnswer":519},{"@type":49,"text":58},{"@type":46,"name":60,"acceptedAnswer":521},{"@type":49,"text":62},{"@type":46,"name":64,"acceptedAnswer":523},{"@type":49,"text":66},true,[526,540,556,572,583,668],{"title":527,"_path":528,"children":529,"icon":539},"About","/about",[530,533,536],{"tit
le":531,"_path":532},"Joel Barmettler - AI Engineer, Researcher, and Entrepreneur","/about/about-me",{"title":534,"_path":535},"What Drives Me - Research Focus and Philosophy on AI Systems","/about/what-drives-me",{"title":537,"_path":538},"Technical Skills and Expertise - AI, ML, Infrastructure, and Web Development","/about/skills","📁",{"title":541,"_path":542,"children":543,"icon":539},"Career","/career",[544,547,550,553],{"title":545,"_path":546},"Building the AI Business Area at bbv Software Services","/career/bbv",{"title":548,"_path":549},"PolygonSoftware: Building a tech company during university","/career/polygon-software",{"title":551,"_path":552},"Machine learning for semiconductor quality control at BESI","/career/besi",{"title":554,"_path":555},"Data engineering for cryptocurrency analytics at CoinPaper","/career/coinpaper",{"title":557,"_path":558,"children":559,"icon":539},"Research","/research",[560,563,566,569],{"title":561,"_path":562},"The Invisible Coalition Partner: How LLMs Vote When Democracy Gets Concrete","/research/invisible-coalition-partner",{"title":564,"_path":565},"ConceptFormer: Graph-native grounding of LLMs via latent concept injection","/research/masters-thesis",{"title":567,"_path":568},"Airspace auction simulator for urban drone traffic","/research/masters-project",{"title":570,"_path":571},"Physical sky rendering engine for appleseed","/research/bachelors-thesis",{"title":573,"_path":574,"children":575,"icon":539},"Projects","/projects",[576,579,582],{"title":577,"_path":578},"md-reheader: Restoring heading hierarchy in PDF-extracted markdown","/projects/md-reheader",{"title":580,"_path":581},"Slidev MCP: AI-powered presentation generation with shareable links","/projects/slidev-mcp",{"title":8,"_path":4},{"title":584,"_path":585,"children":586,"icon":539},"Podcast","/podcast",[587,590,593,596,599,602,605,608,611,614,617,620,623,626,629,632,635,638,641,644,647,650,653,656,659,662,665],{"title":588,"_path":589},"Measuring 
political bias in language models: systematic analysis using Swiss Smart Vote data","/podcast/political-bias-in-language-models",{"title":591,"_path":592},"DeepSeek R1: pure reinforcement learning for reasoning and why distillation changes everything","/podcast/deepseek-r1-reasoning",{"title":594,"_path":595},"DeepSeek V3: how mixture-of-experts and multi-token prediction enable $5.5M training runs","/podcast/deepseek-v3-architecture",{"title":597,"_path":598},"SRF Arena part 3: international regulation, student perspectives, and why the debate structure failed","/podcast/srf-arena-final-analysis",{"title":600,"_path":601},"SRF Arena part 2: the EU AI Act, nationalization demands, and Switzerland's supercomputer strategy","/podcast/srf-arena-regulation-debate",{"title":603,"_path":604},"Deconstructing the SRF Arena AI debate: deepfakes, Swiss GPT, and the job displacement argument","/podcast/srf-arena-ai-debate-analysis",{"title":606,"_path":607},"O3-mini: how a smaller model outperforms its predecessor at a fraction of the cost","/podcast/openai-o3-mini",{"title":609,"_path":610},"OpenAI o3: trading compute time for reasoning capability","/podcast/openai-o3",{"title":612,"_path":613},"ChatGPT o1: reasoning breakthroughs and emergent deception","/podcast/chatgpt-o1-manipulation",{"title":615,"_path":616},"When AI kills: autonomous weapons, drone swarms, and predictive policing","/podcast/when-ai-kills",{"title":618,"_path":619},"Google's AI pivot: 25% AI-generated code and 90% cost reduction","/podcast/google-ai-revolution",{"title":621,"_path":622},"Why AI projects fail: a practitioner's guide to implementation","/podcast/ai-project-implementation",{"title":624,"_path":625},"Deep learning explained: from embedding spaces to few-shot learning","/podcast/deep-learning-explained",{"title":627,"_path":628},"Vision AI: why language models need to see, and how Llama 3.2 gets there","/podcast/vision-ai",{"title":630,"_path":631},"BitNets and the road to AGI: on-device 
inference and Sam Altman's 1000-day prediction","/podcast/bitnets-and-agi",{"title":633,"_path":634},"OpenAI o1 benchmarks and AGI implications: IQ 120, coding breakthroughs, and what they mean","/podcast/openai-o1-technical-analysis",{"title":636,"_path":637},"OpenAI o1 and the mechanics of self-reflection: how 70,000 hidden tokens change inference","/podcast/openai-o1-self-reflection",{"title":639,"_path":640},"AI utopia 2035: when automation funds a renaissance in human agency (part 2 of 2)","/podcast/ai-utopia-2035",{"title":642,"_path":643},"AI dystopia 2035: when AI becomes the lifeblood of the economy (part 1 of 2)","/podcast/ai-dystopia-2035",{"title":645,"_path":646},"AI hype vs. reality: a technical assessment of where things actually stand","/podcast/ai-hype-vs-reality",{"title":648,"_path":649},"Open-source AI: the infrastructure behind the hype","/podcast/open-source-ai",{"title":651,"_path":652},"Is AI intelligent? Why the question matters less than you think","/podcast/is-ai-intelligent",{"title":654,"_path":655},"AI in education: why bans backfire and what actually needs to change","/podcast/ai-in-education",{"title":657,"_path":658},"Bias in AI systems: how 15 people shape the values of a billion-user product","/podcast/bias-in-ai-systems",{"title":660,"_path":661},"AI and the labor market: autonomous agents and the transformation of knowledge work","/podcast/ai-and-the-labor-market",{"title":663,"_path":664},"AI terminology explained: a technical guide beyond the hype","/podcast/ai-terminology-explained",{"title":666,"_path":667},"AI and democratic manipulation: from Cambridge Analytica to language models","/podcast/ai-and-democracy",{"title":669,"_path":670,"children":671,"icon":539},"Appearances","/appearances",[672,675,678,681,684,687,690,693,696,699,702,705,708,711],{"title":673,"_path":674},"AI trends 2025 and predictions for 2026: model convergence, integration, and 
sovereignty","/appearances/webinar-2025-rewind-2026-outlook",{"title":676,"_path":677},"Swiss AI Impact Forum 2025: live demos of the Swiss AI Hub","/appearances/swiss-ai-impact-forum-2025",{"title":679,"_path":680},"AI trends 2024 and predictions for 2025: a technical analysis","/appearances/webinar-2024-rewind-2025-outlook",{"title":682,"_path":683},"AI as a development partner: tools, techniques, and team integration","/appearances/webinar-ai-development-partner",{"title":685,"_path":686},"Swiss AI Impact Forum: Panel on the future of AI in Switzerland","/appearances/swiss-ai-impact-forum-2024",{"title":688,"_path":689},"AI in knowledge management: keynote at the SWICO event in Zurich","/appearances/swico",{"title":691,"_path":692},"Swiss AI Conference: hands-on workshop on AI agents in the enterprise","/appearances/swiss-ai-conference",{"title":694,"_path":695},"AI trends 2023: milestones and developments in artificial intelligence","/appearances/webinar-2023-rewind",{"title":697,"_path":698},"KI Revolution: AI first how a digital native thinks about generative AI","/appearances/bbv-ki-revolution",{"title":700,"_path":701},"AI agents: the future of enterprise automation","/appearances/netzwoche",{"title":703,"_path":704},"ChatGPT demystified: technical deep dive into large language models","/appearances/webinar-chatgpt-demystified",{"title":706,"_path":707},"Swarm intelligence and AI: the future of enterprise automation","/appearances/webinar-swarm-intelligence",{"title":709,"_path":710},"Polygon Software our journey to an innovative UZH tech startup","/appearances/readme-polygon",{"title":712,"_path":713},"UZH startup label for Polygon Software","/appearances/uzh-startup-label",1775406465741]