Qulto Professional Day 2026 – Summary
March 4, 2026
Repository, RAG, AI – how is access to knowledge changing?
Repositories are the “fuel” of AI, but only if they are refined.
On February 25, 2026, we held our next Qulto Professional Day, the central theme of which was repository developments and the advancement of AI.
Presentations
The professional day was opened by Miklós Czoboly, CEO of Qulto, with a year-in-review greeting in which he outlined the company's developments over the past year and its strategic directions for the future. Following this, representatives of the Metropolitan Ervin Szabó Library (FSZEK) presented the Budapest Electronic Archive, illustrating how local memory can be preserved in the digital space while involving the local community. Among the technological innovations, a prominent role was given to the new Discovery interface and the OPAC Dashboard, which aim to make individual research work more efficient and improve the user experience through various lists, workbooks, and integrations.
In the second half of the program, specific content-service solutions came to the fore, such as ELTE's DSpace-based course material repository, where a priority goal is producing reusable content for instructional work, with AI integration. We also learned details about the creation of the Baptist Knowledge Base, a repository intended to collect sources, manuscripts, contents, and catalogs scattered around the world and make them available through dedicated portals.
In the technical sections, Qulto's specialists presented the latest developments of the integrated library system (Hungarian acronym: IKR), including electronic document access. One of the most topical presentations of the day covered the practical application of artificial intelligence in data migration processes, demonstrating how AI can relieve professionals of monotonous workflows and refine the results.
The professional presentations can be accessed on the FSZEK YouTube channel.
Panel Discussion
We explored the topic of the panel discussion – "Synthesis instead of search: how is AI rewriting access to knowledge?" – with László Balázs (FSZEK), Zsolt Bánki (MNL), László Nemes (ELTE), and István Szekrényes (DH-LAB, DE). The starting premise was that without structured datasets, AI is only a blind tool; public-collection databases and repositories that count as authentic sources are therefore crucial. But what can turn a database into a flexible knowledge base, how far are we from a semantic turn on the threshold of the AI era, and do we even still need to translate existing catalogs and collections into semantic databases? AI could save enormous human resources here, and the new technology could even comb through different sources in real time at any moment, so we might no longer need a separate national catalog at all – which is, of course, an extreme position (LB).
The obsolescence of the MARC standard was raised, but the conclusion was that there is currently no better or more suitable standard. If AI nevertheless performs the transcription and enrichment of MARC records, would we dare to let it write into the catalog automatically, without human validation, and what would happen to professional credibility then? The goal is not for the librarian to become a kind of model-checker, but the panelists agreed that it is hard to predict how the profession will be transformed. Zsolt Bánki emphasized that it is not the job of public collections to decide every professional question, nor, for example, to validate AI results one by one; but they must at least mark data that comes from AI and attach a confidence level to it.
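The marking-and-confidence idea can be made concrete with a small sketch. This is a hypothetical data structure, not part of MARC or any existing standard: the field names, the "human"/"ai" source labels, and the 0.9 review threshold are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class EnrichedField:
    """A catalog field value together with its provenance.

    Hypothetical structure for illustration only; not an existing standard.
    """
    tag: str            # e.g. a MARC field tag such as "650" (subject heading)
    value: str
    source: str         # "human" or "ai"
    confidence: float   # model confidence in [0.0, 1.0]; 1.0 for human entry

def needs_review(field: EnrichedField, threshold: float = 0.9) -> bool:
    # AI-supplied values below the threshold are queued for human validation
    # instead of being written into the catalog automatically.
    return field.source == "ai" and field.confidence < threshold

# An AI-generated subject heading with moderate confidence:
subject = EnrichedField(tag="650", value="Digital preservation",
                        source="ai", confidence=0.82)
```

A workflow built this way never silently merges low-confidence AI output into the catalog: `needs_review(subject)` routes it to a librarian, which matches the panel's requirement of a marker plus a confidence level.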
The vectorization of databases led to the topic of RAG (Retrieval-Augmented Generation), i.e., source-grounded answer generation, which acts as a kind of "leash" on AI: it may only look for answers within the knowledge base we specify, ruling out hallucinations. The panelists did not state firmly that the marriage of semantic databases and vector-based search is the solution, a kind of holy grail. What they did state was that the time has come – indeed, it came long ago – to teach semantic modeling alongside cataloging in library-informatics education.
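The retrieval half of RAG can be sketched in a few lines. This is a toy illustration, not a production system: the bag-of-words "embedding" stands in for a real dense-vector model, and the three-passage "repository" is invented for the example. The point is the leash itself: the generator only ever sees passages retrieved from the closed knowledge base.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words term counts. Real RAG systems use dense
    # vectors from a trained model; this only illustrates the retrieval flow.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# A miniature closed "repository": the only source the AI may answer from.
passages = [
    "The Budapest Electronic Archive preserves local memory in digital form.",
    "MARC records describe bibliographic resources in library catalogs.",
    "RAG grounds generated answers in retrieved source passages.",
]
index = [(p, embed(p)) for p in passages]

def retrieve(query: str, k: int = 1) -> list[str]:
    # Rank repository passages by similarity to the query; return the top k.
    q = embed(query)
    ranked = sorted(index, key=lambda pe: cosine(q, pe[1]), reverse=True)
    return [p for p, _ in ranked[:k]]

# The generator would receive ONLY this retrieved context - the "leash":
context = retrieve("What does RAG ground answers in?")
```

Because the answer-generation step is constrained to `context`, the model cannot cite material outside the institution's authenticated holdings, which is exactly the authenticity-preserving property discussed above.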
The advantage of RAG + AI technology is that it preserves authenticity; the disadvantage is that it also constrains. The question arose whether it is worth it for public collections to build small language models trained on their own closed repository data (such as the DH-LAB handwriting recognition project), or whether we should hand our structured collections to a tech giant if, in exchange, readers and researchers would get, for example, a perfect search engine. There is no clear yes-or-no answer, and both paths have advantages, but it was stated that every institution will have to make a clear decision on this in the near future.
Approaching the topic from the users' perspective, the question is whether our search interfaces will be transformed. According to an impromptu show-of-hands "survey" taken among the audience, we are moving away from complex search fields and filters toward a simple chat window. A desirable development, it was suggested, could be a hybrid version with an AI assistant (LB), i.e., the solution already familiar from Google: an AI-generated summary first, with the traditional hit list below it. It is therefore necessary to keep traditional keyword search and place the new-generation search function alongside it, letting the reader or researcher chat with the knowledge base.
The present, however, is that the first public-collection AI projects are only just starting, and they aim at processing collections and repositories and making them visible, not necessarily at serving end-user convenience. Public collections preserve cultural heritage for the future, and their current job is to pass it on in a way and form that future generations will also be able to interpret.
A question from the audience also highlighted a generational difference in attitude: search fields and chat windows both initiate a text search, but what about searching with voice and images? For Generation Alpha, seeking information is not necessarily an intentional process but rather a multimodal experience. The technology is already available; the real challenge is bringing collection data into a "ready-to-converse" format.
The repository presentations and the panel discussion of the professional day illustrate well that public collections represent a secure, structured data foundation (even in MARC, sometimes labeled obsolete) on which a vector-based, AI-powered search layer can be built; this combination serves both the flexibility of databases and user convenience. The only question is: given how rapidly the capabilities of artificial intelligence are said to multiply each year, if we sit down at the same round table in two years, what will we be talking about?