Arthur's Portfolio
PDF Semantic Search

This demo is the interactive app from the RAG v0 work. Upload a PDF, ask a question, and retrieve the most relevant chunks using semantic similarity. See the full write-up in the related work: Building a Basic PDF Semantic Search Engine

  • machine learning
  • nlp
  • langchain
  • rag
Tuesday, March 24, 2026 | 1 minute Read
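The retrieval step the demo describes — rank PDF chunks by semantic similarity to a question and return the best matches — can be sketched in a few lines. This is a toy stand-in, not the demo's actual code: it uses bag-of-words term-frequency vectors and cosine similarity instead of the learned embeddings the LangChain-based demo presumably uses, and the chunk texts are made up for illustration.

```python
import math
import re
from collections import Counter

def embed(text):
    # Toy embedding: a bag-of-words term-frequency vector.
    # (The real demo would use a learned embedding model instead.)
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(count * b[term] for term, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def top_chunks(chunks, question, k=2):
    # Rank chunks by similarity to the question; return the k best.
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(embed(c), q), reverse=True)[:k]

chunks = [
    "LangChain document loaders can load a PDF file.",
    "Cosine similarity compares embedding vectors.",
    "Hugo is a static site generator.",
]
print(top_chunks(chunks, "How do I load a PDF with LangChain?", k=1))
```

Swapping `embed` for a real sentence-embedding model is the only change needed to turn this word-overlap ranking into true semantic search; the ranking logic stays the same.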
Sentiment

This demo showcases Sentiment, an on-device journaling experience with local emotion analysis, per-entry labeling, and trend visualization.

  • machine learning
  • nlp
  • mobile
  • flutter
Tuesday, March 24, 2026 | 1 minute Read
Private Journaling with Emotion Analysis

Introduction Hi there! In a previous article, I experimented with running a BERT-based emotion classification model directly on-device using ONNX Runtime in Flutter. (You can catch up on that here!) At the time, I just wanted to see if I could create a smooth local inference experience without relying on external APIs. But a technical demo only goes so far. I started wondering: what would this look like as a real-world tool?

  • machine learning
  • nlp
  • mobile
Friday, March 20, 2026 | 6 minutes Read
Building a Basic PDF Semantic Search Engine

Introduction Hi there! While reading about the state of the art in Retrieval-Augmented Generation (RAG), I realized that I wanted to experiment with some of the bleeding-edge techniques I had read about. To do so, I needed a simple project to build on. While reading Generative AI with LangChain, I thought, “I wish I could ask questions about this book”, and I guess that was the spark that ignited this project.

  • machine learning
  • nlp
  • mobile
  • langchain
Tuesday, March 3, 2026 | 11 minutes Read
Running ONNX Models in Flutter

Introduction Hi there! Lately, I’ve been seeing a wave of articles and posts praising lightning-fast GPU inference. And don’t get me wrong, GPUs are great, and I absolutely appreciate a good speed boost as much as the next person. But I also believe a huge chunk of real-world use cases simply don’t need massive models or blazing inference speeds. In fact, for many apps, the ability to run small models fully offline, on the device that’s already in your pocket, provides far more practical value, especially when it comes to privacy.

  • machine learning
  • nlp
  • mobile
Friday, February 13, 2026 | 10 minutes Read
Contact me:
  • arthur.queffelec@gmail.com
  • arqueffe
  • Arthur Queffelec

Toha
© 2020 Copyright.
Powered by Hugo