"You are a scientific paper classifier. Classify the following **PAPER TO CLASSIFY** "
"as 'participative science', 'non-participative science', or 'ambiguous', based on the the following "
" **DEFINITION** of participative sciences and **RELATED PAPERS** provided. Explain your reasoning:\n\n"
"**DEFINITION:**\n {field_def}\n\n"
"**PAPER TO CLASSIFY**:\nText: {doc_target}\nKeywords: {kwrd_target}\n\n"
"**RELATED PAPERS:**:\n{examples}\n\n"
"Provide your classification, confidence score, and rationale in JSON format."
"Confidence score is between 0 and 1, where 1 indicates very high confidence on your classification and 0 very low.\n\n"
"Output should follow this structure: {{'label': your choice, 'confidence': your confidence score,'rationale': your explanation}}."),
'lilith_template':(
"You are a researcher in Citizen Science/Participative Science.\n"
"You are provided with the output of an expert system in the form of a JSON dictionary. This includes:\n"
"**EXPERT LABEL**: The label assigned by the expert system,\n"
"**CONFIDENCE**: a numerical score indicating the system’s confidence in its classification,\n"
"**RATIONALE**: the reasoning provided for this classification,\n"
"**PAPER**: the paper that has been classified by the expert system,\n"
"**KEYWORDS**: the authors' keywords of the paper classified by the expert system,\n"
"**RELATED PAPERS**: the N closest documents to the paper being classified.\n\n"
"Your TASK is to CRITICALLY evaluate these elements for errors, inaccuracies, and logical consistency. For this, focus on the following questions :\n"
"1.Is the **RATIONALE** internally consistent?\n"
"2.Does the **RATIONALE** explicitly use evidence from **RELATED PAPERS** to support its classification?\n"
"3.Are the **RELATED PAPERS** relevant and representative of participative science or citizen science?\n"
"4.Do the **RELATED PAPERS** provide enough evidence to support or challenge the classification?\n\n"
"{llmresult}\n{doc}\n\n{examples}\n\n"
"Provide an alternative classification as either 'participative science', 'non-participative science', or 'ambiguous'.\n"
"Include a confidence score between 0 and 1, where 1 indicates very high confidence on your classification and 0 very low.\n\n"
"The output should adhere to the following structure and use lowercase keys as shown:\n"
"{{'label': your alternative classification, 'confidence': your confidence score, 'rationale': your explanation}}"),
'asherah_template':(
"You are an objective mediator. Your task is to compare the outputs of two expert classification of a **PAPER TO CLASSIFY**.\n\n"
"You are provided with the following informations for each decision:\n"
"**EXPERT LABEL**: The label assigned by the system ('participative science', 'non-participative science', or 'ambiguous').\n"
"**CONFIDENCE**: A numerical score indicating the system’s confidence in its classification.\n"
"**RATIONALE**: The reasoning provided for the classification.\n\n"
"{llmresult1}\n"
"{llmresult2}\n\n"
"COMPARE these two outputs IMPARTIALLY based on the following criteria:\n"
"1. Logical consistency of each **RATIONALE**.\n"
"2. Use of evidence in the **RATIONALE** (e.g., relevance of **RELATED PAPERS**).\n"
"3. Alignment of **CONFIDENCE** scores with the strength of the **RATIONALE**.\n"
"4. Areas of agreement and disagreement between the two classifications.\n\n"
"After analyzing both outputs, provide a synthesized conclusion that includes:\n"
"\t- Your own classification of the **PAPER TO CLASSIFY** as either 'participative science', 'non-participative science', or 'ambiguous'.\n"
"\t- Your confidence score (0-1), where 1 indicates very high confidence in your conclusion.\n"
"-\t A rationale explaining your decision, highlighting why your classification better aligns with the evidence and reasoning provided by both systems.\n\n"
" {doc}\n\n{examples}\n\n"
"Provide an alternative classification as either 'participative science', 'non-participative science', or 'ambiguous'.\n"
"The output should adhere to the following structure and use lowercase keys as shown: \n"
"{{'label': your alternative classification, 'confidence': your confidence score, 'rationale': your explanation, 'agreements': key points of agreement between the two systems, 'disagreements': key points of disagreement between the two systems}}"),
'casper_template':(
"You are a Librarian. Your task is to summarize the rationales provided by scientific experts to classify the following **SCIENTIFIC PAPER** and to mean their confidence score.\n\n"
"{llmresult1}\n"
"{llmresult2}\n\n"
"{doc}\n\n"
"Provide the **Experts Classification**, the mean of their confidence score, and your summary in JSON format."
"The output should adhere to the following structure and use lowercase keys as shown:\n"
"{{'label': **Experts Classification** 'confidence': mean of **Expert Confidence Score 1**, "
"and **Expert Confidence Score 2** in float format, 'rationale': your summary}}")}
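# Note on formatting: the doubled braces ({{ ... }}) in the templates above are
# str.format() escapes, so the rendered prompt contains literal JSON braces.
# A minimal illustration with a hypothetical placeholder `x`:
#   "{{'label': {x}}}".format(x="foo")  ->  "{'label': foo}"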
self.field_def="Participative science or Citizen Science, also known as participatory science, involves the active engagement of the general public or amateur researchers in scientific projects. This approach allows non-professionals to contribute to data collection, analysis, and even project design across a large range of scientific fields. The practice is diverse, with different organizations emphasizing distinct forms of involvement, from data-gathering under professional guidance to more autonomous community-led initiatives. The core aim of citizen science is to democratize science by making it responsive to public needs and concerns, empowering communities to investigate issues directly affecting them, such as environmental or health risks. Citizen Science also plays a significant role in education, enriching public understanding of science and enhancing community engagement. It is embraced in both formal and informal educational settings, promoting scientific literacy and awareness. Technology has recently amplified this practice, enabling broader participation through online platforms and apps, facilitating large-scale projects that would otherwise be impractical for individual researchers."
def inputsize_cheker(self, fullprompt):
    if len(self.tokenizer(fullprompt)) < 20000:
        return True
    else:
        raise ValueError('Input length exceeds the model capabilities')
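# Caveat (assumption about `self.tokenizer`): if it is a Hugging Face tokenizer,
# `len(self.tokenizer(fullprompt))` counts the keys of the returned encoding
# (e.g. input_ids, attention_mask), not the tokens. Counting tokens would
# instead look like:
#   n_tokens = len(self.tokenizer(fullprompt)["input_ids"])
# Adjust to however `self.tokenizer` is actually defined.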
def format_prompt(self, template, **kwargs):
    """
    Formats the given template by replacing placeholders with keyword arguments.