I explain how to build an app that generates multiple-choice questions (MCQs) on any user-defined topic. The app extracts Wikipedia articles related to the user's request and uses RAG to query a chat model that generates the questions.
I will demonstrate how the app works, explain how Wikipedia articles are retrieved, and show how these are used to invoke a chat model. Next, I explain the key components of this app in more detail. The code of the app is available here.
App Demo
The gif above shows the user entering the learning context, the generated MCQ, and the feedback after the user submitted an answer.

On the first screen the user describes the context of the MCQs that should be generated. After pressing "Submit Context" the app searches for Wikipedia articles whose content matches the user query.

The app splits each Wikipedia page into sections and scores them based on how closely they match the user query. These scores are used to sample the context of the next question, which is displayed on the next screen with four answer choices. The user can select a choice and submit it via "Submit Answer". It is also possible to skip the question via "Next Question". In this case the question is considered to have missed the user's expectations, and the context of this question will be avoided when generating the following questions. To end the session the user can choose "End MCQ".

The next screen, shown after the user submitted an answer, indicates whether the answer was correct and provides an additional explanation. Afterwards, the user can either get a new question via "Next Question" or end the session with "End MCQ".

The end-session screen shows how many questions were answered correctly and incorrectly. Additionally, it contains the number of questions the user rejected via "Next Question". If the user selects "Start New Session", the start screen is displayed, where a new context for the next session can be provided.
Concept
The goal of this app is to produce high-quality and up-to-date questions on any user-defined topic. User feedback is taken into account to ensure that the generated questions meet the user's expectations.
To retrieve high-quality and up-to-date context, Wikipedia articles are selected with respect to the user's query. Each article is split into sections, and every section is scored based on its similarity to the user query. If the user rejects a question, the score of the respective section is downgraded to reduce the likelihood of sampling this section again.
This process can be separated into two workflows:
- Context Retrieval
- Question Generation
Both are described below.
Context Retrieval
The workflow that derives the context of the MCQs from Wikipedia based on the user query is shown below.

The user enters the query that describes the context of the MCQs on the start screen. An example user query could be: "Ask me anything about stars and planets".
To efficiently search for Wikipedia articles, this query is converted into keywords. The keywords for the query above are: "Stars", "Planets", "Astronomy", "Solar System", and "Galaxy".
For each keyword a Wikipedia search is executed, of which the top three pages are selected. Not every one of these 15 pages is a good match for the query provided by the user. To remove irrelevant pages at the earliest possible stage, the vector similarity of the embedded user query and page excerpt is calculated. Pages whose similarity is below a threshold are filtered out. In our example, 3 of 15 pages were removed.
The remaining pages are read and divided into sections. As not all page content may be related to the user query, splitting the pages into sections allows selecting the parts of a page that match the user query especially well. Hence, for each section the vector similarity against the user query is calculated, and sections with low similarity are filtered out. The remaining 12 pages contained 305 sections, of which 244 were kept after filtering.
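To make the filtering concrete, here is a minimal sketch, assuming an embed helper that maps text to an embedding vector and an assumed similarity threshold (both are placeholders, not the app's exact values):
import numpy as np

SIMILARITY_THRESHOLD = 0.4  # assumed cutoff, tune for the chosen embedding model

def cosine_similarity(a, b):
    # Cosine similarity of two embedding vectors
    a, b = np.asarray(a), np.asarray(b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def filter_texts(user_query, texts, embed):
    # `embed` is a hypothetical callable that maps a string to an embedding vector.
    # `texts` maps a title to its text (page excerpts or sections alike).
    query_vector = embed(user_query)
    similarities = {title: cosine_similarity(query_vector, embed(text))
                    for title, text in texts.items()}
    # Keep only texts that are similar enough to the user query
    return {title: sim for title, sim in similarities.items()
            if sim >= SIMILARITY_THRESHOLD}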
The last step of the retrieval workflow is to assign a score to each section with respect to its vector similarity. This score is later used to sample sections for the question generation.
Question Generation
The workflow to generate a new MCQ is shown below:

The first step is to sample one section with respect to the section scores. The text of this section is inserted together with the user query into a prompt to invoke a chat model. The chat model returns a JSON-formatted response that contains the question, answer choices, and an explanation of the correct answer. In case the provided context is not suitable to generate an MCQ that addresses the user query, the chat model is instructed to return a keyword that signals that the question generation was not successful.
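A condensed sketch of this step might look as follows, assuming the section scores are kept in a dictionary and a LangChain-style chat model with an invoke() method; the fail keyword and function names are illustrative:
import json
import random

FAIL_KEYWORD = "NO_QUESTION"  # assumed marker, see the MCQ prompt later in this article

def sample_section(section_scores):
    # Weighted sampling: sections with higher scores are more likely to be picked
    titles = list(section_scores)
    return random.choices(titles, weights=list(section_scores.values()), k=1)[0]

def generate_mcq(chat_model, prompt):
    # `chat_model` is assumed to expose a LangChain-style invoke() method
    response = chat_model.invoke(prompt)
    mcq = json.loads(response.content)
    if mcq["question"] == FAIL_KEYWORD:
        return None  # context was not suitable, caller downgrades the section score
    return mcq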
If the question generation was successful, the question and the answer choices are displayed to the user. Once the user submits an answer, it is evaluated whether the answer was correct, and the explanation of the correct answer is shown. To generate a new question the same workflow is repeated.
In case the question generation was not successful, or the user rejected the question by clicking "Next Question", the score of the section that was selected to generate the prompt is downgraded. Hence, it is less likely that this section will be selected again.
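The bookkeeping for this can be as simple as two counters that feed the scoring formula described in the next section; a sketch (the counter and function names are illustrative):
from collections import Counter

page_rejections = Counter()     # rejections per Wikipedia page
section_rejections = Counter()  # rejections per section

def register_rejection(page_title, section_title):
    # Called when the model fails on a section or the user clicks "Next Question"
    page_rejections[page_title] += 1
    section_rejections[section_title] += 1
    # Section scores are then recomputed from these counts,
    # see the scoring formula in the Context Scoring section.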
Key Components
Next, I will explain some key components of the workflows in more detail.
Extracting Wiki Articles
Wikipedia articles are extracted in two steps: First, a search is run to find suitable pages. After filtering the search results, the pages, split into sections, are read.
Search requests are sent to this URL, together with a header containing the requester's contact information and a parameter dictionary with the search query and the number of pages to be returned. The output is in JSON format and can be converted to a dictionary. The code below shows how to run the request:
import os
import requests

# search_query and number_of_results are defined by the calling code
headers = {'User-Agent': os.getenv('WIKI_USER_AGENT')}
parameters = {'q': search_query, 'limit': number_of_results}
response = requests.get(WIKI_SEARCH_URL, headers=headers, params=parameters)
page_info = response.json()['pages']
After filtering the search results based on the pages' excerpts, the text of the remaining pages is imported using wikipediaapi:
import wikipediaapi

def get_wiki_page_sections_as_dict(page_title, sections_exclude=SECTIONS_EXCLUDE):
    wiki_wiki = wikipediaapi.Wikipedia(user_agent=os.getenv('WIKI_USER_AGENT'), language='en')
    page = wiki_wiki.page(page_title)
    if not page.exists():
        return None

    def sections_to_dict(sections, parent_titles=[]):
        result = {'Summary': page.summary}
        for section in sections:
            if section.title in sections_exclude:
                continue
            section_title = ": ".join(parent_titles + [section.title])
            if section.text:
                result[section_title] = section.text
            result.update(sections_to_dict(section.sections, parent_titles + [section.title]))
        return result

    return sections_to_dict(page.sections)
To access Wikipedia articles, the app uses wikipediaapi.Wikipedia, which requires a user-agent string for identification. It returns a WikipediaPage object that contains a summary of the page and the page sections, each with a title and text. Sections are organized hierarchically, meaning each section object holds another list of sections that are the subsections of the respective section. The function above reads all sections of a page and returns a dictionary that maps the concatenation of all parent section titles and the subsection title to the respective text.
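For illustration, the function could be used like this (the page title is just an example):
sections = get_wiki_page_sections_as_dict("Solar System")
if sections:
    for section_title, section_text in sections.items():
        print(f"{section_title}: {len(section_text)} characters")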
Context Scoring
Sections that match the user query better should get a higher probability of being selected. This is achieved by assigning a score to each section, which is used as the weight for sampling the sections. This score is calculated as follows:
\[ s_{section} = w_{rejection} \, s_{rejection} + (1 - w_{rejection}) \, s_{sim} \]
Each section receives a score based on two factors: how often it has been rejected, and how closely its content matches the user query. These scores are combined into a weighted sum. The section rejection score consists of two components: the number of times the section's page has been rejected over the highest number of page rejections, and the number of this section's rejections over the highest number of section rejections:
\[ s_{rejection} = 1 - \frac{1}{2} \left( \frac{n_{page(s)}}{\max_{page} n_{page}} + \frac{n_s}{\max_{s} n_s} \right) \]
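Put together, the scoring can be written as a small function. This is a sketch under the assumption that rejection counts are tracked per page and per section and that the rejection weight is 0.5 (the actual weight in the app may differ):
def section_score(similarity, n_page, n_section, max_page, max_section, w_rejection=0.5):
    # Normalize the rejection counts by the highest observed counts
    page_term = n_page / max_page if max_page else 0.0
    section_term = n_section / max_section if max_section else 0.0
    s_rejection = 1 - 0.5 * (page_term + section_term)
    # Weighted sum of the rejection score and the query similarity
    return w_rejection * s_rejection + (1 - w_rejection) * similarity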
Prompt Engineering
Prompt engineering is an essential aspect of the learning app's functionality. The app uses two prompts to:
- Get keywords for the Wikipedia page search
- Generate MCQs for the sampled context
The template of the keyword generation prompt is shown below:
KEYWORDS_TEMPLATE = """
You are an assistant to generate keywords to search for Wikipedia articles that contain content the user wants to learn.
For a given user query return at most {n_keywords} keywords. Make sure every keyword is a good match to the user query.
Prefer returning fewer keywords over including less relevant ones.
Instructions:
- Return the keywords separated by commas
- Do not return anything else
"""
This system message is concatenated with a human message containing the user query to invoke the LLM. The parameter n_keywords sets the maximum number of keywords to be generated. The instructions ensure that the response can easily be converted to a list of keywords. Despite these instructions, the LLM usually returns the maximum number of keywords, including some less relevant ones.
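A minimal sketch of this call, assuming a LangChain chat model (the model wrapper and model name are one possible choice, not necessarily the app's exact setup):
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_openai import ChatOpenAI  # assumed model wrapper

chat_model = ChatOpenAI(model="gpt-4o-mini")  # illustrative model name

def get_keywords(user_query, n_keywords=5):
    messages = [
        SystemMessage(content=KEYWORDS_TEMPLATE.format(n_keywords=n_keywords)),
        HumanMessage(content=user_query),
    ]
    response = chat_model.invoke(messages)
    # The prompt asks for comma-separated keywords, so parsing is a simple split
    return [keyword.strip() for keyword in response.content.split(",")]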
The MCQ prompt contains the sampled section and invokes the chat model to respond with a question, answer choices, and an explanation of the correct answer in a machine-readable format.
MCQ_TEMPLATE = """
You are a learning app that generates multiple-choice questions based on educational content. The user provided the
following request to define the learning content:
"{user_query}"
Based on the user request, the following context was retrieved:
"{context}"
Generate a multiple-choice question directly based on the provided context. The correct answer must be explicitly stated
in the context and must always be the first option in the choices list. Additionally, provide an explanation for why
the correct answer is correct.
Number of answer choices: {n_choices}
{previous_questions}{rejected_questions}
The JSON output should follow this structure (for number of choices = 4):
{{"question": "Your generated question based on the context", "choices": ["Correct answer (this must be the first choice)","Distractor 1","Distractor 2","Distractor 3"], "explanation": "A brief explanation of why the correct answer is correct."}}
Instructions:
- Generate one multiple-choice question strictly based on the context.
- Provide exactly {n_choices} answer choices, ensuring the first one is the correct answer.
- Include a concise explanation of why the correct answer is correct.
- Do not return anything other than the JSON output.
- The provided explanation should not assume the user is aware of the context. Avoid formulations like "As stated in the text...".
- The response must be machine readable and not contain line breaks.
- Check if it is possible to generate a question based on the provided context that is aligned with the user request. If it is not possible, set the generated question to "{fail_keyword}".
"""
The inserted parameters are:
- user_query: text of the user query
- context: text of the sampled section
- n_choices: number of answer choices
- previous_questions: instruction not to repeat previous questions, with a list of all previous questions
- rejected_questions: instruction to avoid questions of similar nature or context, with a list of rejected questions
- fail_keyword: keyword that indicates that a question could not be generated
Including previous questions reduces the chance that the chat model repeats questions. Furthermore, by providing rejected questions, the user's feedback is taken into account when generating new questions. The example in the template should ensure that the generated output is in the correct format so that it can easily be converted to a dictionary. Setting the correct answer as the first choice avoids requiring an additional output field that indicates the correct answer. When displaying the choices to the user, their order is shuffled. The last instruction defines what output should be provided in case it is not possible to generate a question matching the user query. Using a standardized keyword makes it easy to identify when the question generation has failed.
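A sketch of how the response could be parsed and the choices shuffled while keeping track of the correct answer (this helper is illustrative, not the app's exact code):
import json
import random

def parse_mcq_response(response_text):
    mcq = json.loads(response_text)
    correct_answer = mcq["choices"][0]  # the prompt puts the correct answer first
    random.shuffle(mcq["choices"])      # shuffle before displaying to the user
    mcq["correct_index"] = mcq["choices"].index(correct_answer)
    return mcq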
Streamlit App
The app is built using Streamlit, an open-source app framework in Python. Streamlit provides many functions that allow adding page elements with just one line of code. For example, the element where the user can write the query is created via:
context_text = st.text_area("Enter the context for MCQ questions:")
where context_text contains the string the user has written. Buttons are created with st.button or st.radio, where the returned variable contains the information whether the button has been pressed or which value has been selected.
The page is generated top-down by a script that defines each element sequentially. Every time the user interacts with the page, e.g. by clicking a button, the script can be re-run with st.rerun(). When re-running the script, it is important to carry over information from the previous run. This is done via st.session_state, which can contain any objects. For example, the MCQ generator instance is assigned to the session state as:
st.session_state.mcq_generator = MCQGenerator()
so that when the context retrieval workflow has been executed, the found context is available to generate an MCQ on the next page.
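In practice this assignment is typically guarded so that the object survives re-runs instead of being recreated on every interaction; a minimal sketch, assuming the app's MCQGenerator class and a hypothetical retrieve_context method:
import streamlit as st

# Initialize once; st.session_state persists across st.rerun() calls
if "mcq_generator" not in st.session_state:
    st.session_state.mcq_generator = MCQGenerator()

context_text = st.text_area("Enter the context for MCQ questions:")
if st.button("Submit Context"):
    st.session_state.mcq_generator.retrieve_context(context_text)  # hypothetical method
    st.rerun()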
Enhancements
There are many options to enhance this app. Beyond Wikipedia, users could also upload their own PDFs to generate questions from custom materials, such as lecture slides or textbooks. This would enable the user to generate questions on any content; for example, it could be used to prepare for exams by importing course materials.
Another aspect that could be improved is to optimize the context selection so that the number of questions rejected by the user is minimized. Instead of updating scores, an ML model could be trained to predict how likely it is that a question will be rejected, based on features like the similarity to accepted and rejected questions. Every time another question is rejected, this model could be retrained.
Also, the generated questions could be saved so that when a user wants to repeat the learning exercise, these questions can be used again. An algorithm could be applied to select previously incorrectly answered questions more frequently, to focus on improving the learner's weaknesses.
Summary
This article showcases how retrieval-augmented generation (RAG) can be used to build an interactive learning app that generates high-quality, context-specific multiple-choice questions from Wikipedia articles. By combining keyword-based search, semantic filtering, prompt engineering, and a feedback-driven scoring system, the app dynamically adapts to user preferences and learning goals. Leveraging tools like Streamlit allows rapid prototyping and deployment, making this an accessible framework for educators, students, and developers alike. With further enhancements, such as custom document uploads, adaptive question sequencing, and machine-learning-based rejection prediction, the app holds strong potential as a versatile platform for personalized learning and self-assessment.
Further Reading
To learn more about RAG I can recommend these articles by Shaw Talebi and Avishek Biswas. Harrison Hoffman wrote two excellent tutorials on embeddings and vector databases and building an LLM RAG chatbot. How to manage states in Streamlit is described in Baertschi's article.
If not stated otherwise, all images were created by the author.