API reference
find_themes
async
find_themes(responses_df: pd.DataFrame, llm: RunnableWithFallbacks, question: str, target_n_themes: int | None = None, system_prompt: str = CONSULTATION_SYSTEM_PROMPT, verbose: bool = True, concurrency: int = 10) -> dict[str, str | pd.DataFrame]
Process survey responses through a multi-stage theme analysis pipeline.
This pipeline performs the following sequential analysis steps:

1. Sentiment analysis of responses
2. Initial theme generation
3. Theme condensation (combining similar themes)
4. Theme refinement
5. Theme target alignment (optional, if target_n_themes is specified)
6. Mapping responses to refined themes
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `responses_df` | `DataFrame` | DataFrame containing survey responses. | *required* |
| `llm` | `RunnableWithFallbacks` | Language model instance for text analysis. | *required* |
| `question` | `str` | The survey question. | *required* |
| `target_n_themes` | `int \| None` | Target number of themes to consolidate to. If None, the theme target alignment step is skipped. | `None` |
| `system_prompt` | `str` | System prompt to guide the LLM's behavior. | `CONSULTATION_SYSTEM_PROMPT` |
| `verbose` | `bool` | Whether to show information messages during processing. | `True` |
| `concurrency` | `int` | Number of concurrent API calls to make. | `10` |
Returns:
| Type | Description |
|---|---|
| `dict[str, str \| DataFrame]` | Dictionary containing results from each pipeline stage: `question` (the survey question string), `sentiment` (DataFrame with sentiment analysis results), `themes` (DataFrame with the final themes output), `mapping` (DataFrame mapping responses to final themes), and `unprocessables` (DataFrame containing the inputs that could not be processed by the LLM). |
Source code in src/themefinder/core.py
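A minimal usage sketch follows. The import path (`themefinder.core`), the `langchain_openai` provider, and the specific model names are illustrative assumptions; any LangChain runnable satisfying `RunnableWithFallbacks` can be passed as `llm`.

```python
import asyncio

import pandas as pd
from langchain_openai import ChatOpenAI  # assumed provider; any LangChain chat model works

from themefinder.core import find_themes  # import path assumed from the source location above

# Wrapping a primary model with a fallback produces a RunnableWithFallbacks.
llm = ChatOpenAI(model="gpt-4o").with_fallbacks([ChatOpenAI(model="gpt-4o-mini")])

responses_df = pd.DataFrame(
    {
        "response_id": [1, 2, 3],
        "response": [
            "The proposal would reduce congestion in the town centre.",
            "I worry about the cost to small businesses.",
            "More cycle lanes would make the scheme safer.",
        ],
    }
)
question = "What do you think of the proposed transport scheme?"

results = asyncio.run(
    find_themes(responses_df, llm, question, target_n_themes=5, concurrency=5)
)
print(results["themes"])   # final themes DataFrame
print(results["mapping"])  # responses mapped to the final themes
```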
sentiment_analysis
async
sentiment_analysis(responses_df: pd.DataFrame, llm: RunnableWithFallbacks, question: str, batch_size: int = 20, prompt_template: str | Path | PromptTemplate = 'sentiment_analysis', system_prompt: str = CONSULTATION_SYSTEM_PROMPT, concurrency: int = 10) -> tuple[pd.DataFrame, pd.DataFrame]
Perform sentiment analysis on survey responses using an LLM.
This function processes survey responses in batches to analyze their sentiment using a language model. It maintains response integrity by checking response IDs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `responses_df` | `DataFrame` | DataFrame containing survey responses to analyze. Must contain 'response_id' and 'response' columns. | *required* |
| `llm` | `RunnableWithFallbacks` | Language model instance to use for sentiment analysis. | *required* |
| `question` | `str` | The survey question. | *required* |
| `batch_size` | `int` | Number of responses to process in each batch. | `20` |
| `prompt_template` | `str \| Path \| PromptTemplate` | Template for structuring the prompt to the LLM. Can be a string identifier, path to a template file, or PromptTemplate instance. | `'sentiment_analysis'` |
| `system_prompt` | `str` | System prompt to guide the LLM's behavior. | `CONSULTATION_SYSTEM_PROMPT` |
| `concurrency` | `int` | Number of concurrent API calls to make. | `10` |
Returns:
| Type | Description |
|---|---|
| `tuple[DataFrame, DataFrame]` | A tuple of two DataFrames: the first contains the rows that were successfully processed by the LLM; the second contains the rows that could not be processed by the LLM. |
Note
The function uses integrity_check to ensure responses maintain their original order and association after processing.
Source code in src/themefinder/core.py
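A usage sketch, under the same assumptions as the earlier example (assumed `langchain_openai` provider and `themefinder.core` import path). The call returns a tuple of successfully processed and unprocessable rows.

```python
import asyncio

import pandas as pd
from langchain_openai import ChatOpenAI  # assumed provider

from themefinder.core import sentiment_analysis  # import path assumed

llm = ChatOpenAI(model="gpt-4o").with_fallbacks([ChatOpenAI(model="gpt-4o-mini")])

responses_df = pd.DataFrame(
    {
        "response_id": [1, 2],
        "response": ["I strongly support this.", "This will harm local trade."],
    }
)

sentiment_df, failed_df = asyncio.run(
    sentiment_analysis(
        responses_df,
        llm,
        question="What do you think of the proposal?",
        batch_size=10,
    )
)
print(sentiment_df)  # rows the LLM processed successfully
print(failed_df)     # rows the LLM could not process
```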
theme_generation
async
theme_generation(responses_df: pd.DataFrame, llm: RunnableWithFallbacks, question: str, batch_size: int = 50, partition_key: str | None = 'position', prompt_template: str | Path | PromptTemplate = 'theme_generation', system_prompt: str = CONSULTATION_SYSTEM_PROMPT, concurrency: int = 10) -> tuple[pd.DataFrame, pd.DataFrame]
Generate themes from survey responses using an LLM.
This function processes batches of survey responses to identify common themes or topics.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `responses_df` | `DataFrame` | DataFrame containing survey responses. Must include 'response_id' and 'response' columns. | *required* |
| `llm` | `RunnableWithFallbacks` | Language model instance to use for theme generation. | *required* |
| `question` | `str` | The survey question. | *required* |
| `batch_size` | `int` | Number of responses to process in each batch. | `50` |
| `partition_key` | `str \| None` | Column name to use for batching related responses together. Defaults to "position" for sentiment-enriched responses, but can be set to None for sequential batching or to another column name for a different grouping strategy. | `'position'` |
| `prompt_template` | `str \| Path \| PromptTemplate` | Template for structuring the prompt to the LLM. Can be a string identifier, path to a template file, or PromptTemplate instance. | `'theme_generation'` |
| `system_prompt` | `str` | System prompt to guide the LLM's behavior. | `CONSULTATION_SYSTEM_PROMPT` |
| `concurrency` | `int` | Number of concurrent API calls to make. | `10` |
Returns:
| Type | Description |
|---|---|
| `tuple[DataFrame, DataFrame]` | A tuple of two DataFrames: the first contains the rows that were successfully processed by the LLM; the second contains the rows that could not be processed by the LLM. |
Source code in src/themefinder/core.py
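A sketch of calling this stage directly on raw responses by setting `partition_key=None` (sequential batching) instead of running it on sentiment-enriched responses. Provider and import path are assumed as in the earlier examples.

```python
import asyncio

import pandas as pd
from langchain_openai import ChatOpenAI  # assumed provider

from themefinder.core import theme_generation  # import path assumed

llm = ChatOpenAI(model="gpt-4o").with_fallbacks([ChatOpenAI(model="gpt-4o-mini")])
question = "What do you think of the proposed transport scheme?"
responses_df = pd.DataFrame(
    {
        "response_id": [1, 2],
        "response": ["Good for air quality.", "Parking will become impossible."],
    }
)

# partition_key=None batches responses sequentially rather than grouping
# on the "position" column produced by sentiment enrichment.
themes_df, failed_df = asyncio.run(
    theme_generation(responses_df, llm, question, batch_size=50, partition_key=None)
)
print(themes_df)
```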
theme_condensation
async
theme_condensation(themes_df: pd.DataFrame, llm: RunnableWithFallbacks, question: str, batch_size: int = 75, prompt_template: str | Path | PromptTemplate = 'theme_condensation', system_prompt: str = CONSULTATION_SYSTEM_PROMPT, concurrency: int = 10, **kwargs) -> tuple[pd.DataFrame, pd.DataFrame]
Condense and combine similar themes identified from survey responses.
This function processes the initially identified themes to combine similar or overlapping topics into more cohesive, broader categories using an LLM.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `themes_df` | `DataFrame` | DataFrame containing the initial themes identified from survey responses. | *required* |
| `llm` | `RunnableWithFallbacks` | Language model instance to use for theme condensation. | *required* |
| `question` | `str` | The survey question. | *required* |
| `batch_size` | `int` | Number of themes to process in each batch. | `75` |
| `prompt_template` | `str \| Path \| PromptTemplate` | Template for structuring the prompt to the LLM. Can be a string identifier, path to a template file, or PromptTemplate instance. | `'theme_condensation'` |
| `system_prompt` | `str` | System prompt to guide the LLM's behavior. | `CONSULTATION_SYSTEM_PROMPT` |
| `concurrency` | `int` | Number of concurrent API calls to make. | `10` |
Returns:
| Type | Description |
|---|---|
| `tuple[DataFrame, DataFrame]` | A tuple of two DataFrames: the first contains the rows that were successfully processed by the LLM; the second contains the rows that could not be processed by the LLM. |
Source code in src/themefinder/core.py
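A sketch chaining theme generation into condensation, mirroring the stage order of the `find_themes` pipeline; it assumes the first element of each returned tuple is the DataFrame the next stage expects. Provider and import paths are assumed as above.

```python
import asyncio

import pandas as pd
from langchain_openai import ChatOpenAI  # assumed provider

from themefinder.core import theme_condensation, theme_generation  # import paths assumed

llm = ChatOpenAI(model="gpt-4o").with_fallbacks([ChatOpenAI(model="gpt-4o-mini")])
question = "What do you think of the proposed transport scheme?"
responses_df = pd.DataFrame(
    {"response_id": [1, 2], "response": ["Support it fully.", "Too expensive to run."]}
)

async def condense() -> pd.DataFrame:
    themes_df, _ = await theme_generation(responses_df, llm, question, partition_key=None)
    # Similar or overlapping themes are combined into broader categories.
    condensed_df, _ = await theme_condensation(themes_df, llm, question, batch_size=75)
    return condensed_df

condensed_df = asyncio.run(condense())
```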
theme_refinement
async
theme_refinement(condensed_themes_df: pd.DataFrame, llm: RunnableWithFallbacks, question: str, batch_size: int = 10000, prompt_template: str | Path | PromptTemplate = 'theme_refinement', system_prompt: str = CONSULTATION_SYSTEM_PROMPT, concurrency: int = 10) -> tuple[pd.DataFrame, pd.DataFrame]
Refine and standardize condensed themes using an LLM.
This function processes previously condensed themes to create clear, standardized theme descriptions. It also transforms the output format for improved readability by transposing the results into a single-row DataFrame where columns represent individual themes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `condensed_themes_df` | `DataFrame` | DataFrame containing the condensed themes from the previous pipeline stage. | *required* |
| `llm` | `RunnableWithFallbacks` | Language model instance to use for theme refinement. | *required* |
| `question` | `str` | The survey question. | *required* |
| `batch_size` | `int` | Number of themes to process in each batch. | `10000` |
| `prompt_template` | `str \| Path \| PromptTemplate` | Template for structuring the prompt to the LLM. Can be a string identifier, path to a template file, or PromptTemplate instance. | `'theme_refinement'` |
| `system_prompt` | `str` | System prompt to guide the LLM's behavior. | `CONSULTATION_SYSTEM_PROMPT` |
| `concurrency` | `int` | Number of concurrent API calls to make. | `10` |
|
Returns:
| Type | Description |
|---|---|
| `tuple[DataFrame, DataFrame]` | A tuple of two DataFrames: the first contains the rows that were successfully processed by the LLM; the second contains the rows that could not be processed by the LLM. |
Note
The function adds sequential response_ids to the input DataFrame and transposes the output for improved readability and easier downstream processing.
Source code in src/themefinder/core.py
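A sketch continuing the chain from condensation into refinement; note the transposed, one-column-per-theme shape of the output described above. Provider, import paths, and the chaining of stage outputs are assumptions, as in the previous examples.

```python
import asyncio

import pandas as pd
from langchain_openai import ChatOpenAI  # assumed provider

from themefinder.core import (  # import paths assumed
    theme_condensation,
    theme_generation,
    theme_refinement,
)

llm = ChatOpenAI(model="gpt-4o").with_fallbacks([ChatOpenAI(model="gpt-4o-mini")])
question = "What do you think of the proposed transport scheme?"
responses_df = pd.DataFrame(
    {"response_id": [1, 2], "response": ["Support it fully.", "Too expensive to run."]}
)

async def refine() -> pd.DataFrame:
    themes_df, _ = await theme_generation(responses_df, llm, question, partition_key=None)
    condensed_df, _ = await theme_condensation(themes_df, llm, question)
    refined_df, _ = await theme_refinement(condensed_df, llm, question)
    return refined_df

refined_df = asyncio.run(refine())
print(refined_df.columns)  # one column per refined theme
```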
theme_target_alignment
async
theme_target_alignment(refined_themes_df: pd.DataFrame, llm: RunnableWithFallbacks, question: str, target_n_themes: int = 10, batch_size: int = 10000, prompt_template: str | Path | PromptTemplate = 'theme_target_alignment', system_prompt: str = CONSULTATION_SYSTEM_PROMPT, concurrency: int = 10) -> tuple[pd.DataFrame, pd.DataFrame]
Align themes to target number using an LLM.
This function processes refined themes to consolidate them into a target number of distinct categories while preserving all significant details and perspectives. It transforms the output format for improved readability by transposing the results into a single-row DataFrame where columns represent individual themes.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `refined_themes_df` | `DataFrame` | DataFrame containing the refined themes from the previous pipeline stage. | *required* |
| `llm` | `RunnableWithFallbacks` | Language model instance to use for theme alignment. | *required* |
| `question` | `str` | The survey question. | *required* |
| `target_n_themes` | `int` | Target number of themes to consolidate to. | `10` |
| `batch_size` | `int` | Number of themes to process in each batch. | `10000` |
| `prompt_template` | `str \| Path \| PromptTemplate` | Template for structuring the prompt to the LLM. Can be a string identifier, path to a template file, or PromptTemplate instance. | `'theme_target_alignment'` |
| `system_prompt` | `str` | System prompt to guide the LLM's behavior. | `CONSULTATION_SYSTEM_PROMPT` |
| `concurrency` | `int` | Number of concurrent API calls to make. | `10` |
|
Returns:
| Type | Description |
|---|---|
| `tuple[DataFrame, DataFrame]` | A tuple of two DataFrames: the first contains the rows that were successfully processed by the LLM; the second contains the rows that could not be processed by the LLM. |
Note
The function adds sequential response_ids to the input DataFrame and transposes the output for improved readability and easier downstream processing.
Source code in src/themefinder/core.py
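A sketch that consolidates the refined themes down to a fixed number of categories; in the `find_themes` pipeline this step runs only when `target_n_themes` is given. Provider, import paths, and the chaining of stage outputs are assumptions, as above.

```python
import asyncio

import pandas as pd
from langchain_openai import ChatOpenAI  # assumed provider

from themefinder.core import (  # import paths assumed
    theme_condensation,
    theme_generation,
    theme_refinement,
    theme_target_alignment,
)

llm = ChatOpenAI(model="gpt-4o").with_fallbacks([ChatOpenAI(model="gpt-4o-mini")])
question = "What do you think of the proposed transport scheme?"
responses_df = pd.DataFrame(
    {"response_id": [1, 2], "response": ["Support it fully.", "Too expensive to run."]}
)

async def align() -> pd.DataFrame:
    themes_df, _ = await theme_generation(responses_df, llm, question, partition_key=None)
    condensed_df, _ = await theme_condensation(themes_df, llm, question)
    refined_df, _ = await theme_refinement(condensed_df, llm, question)
    # Consolidate the refined themes into at most five categories.
    aligned_df, _ = await theme_target_alignment(refined_df, llm, question, target_n_themes=5)
    return aligned_df

aligned_df = asyncio.run(align())
```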
theme_mapping
async
theme_mapping(responses_df: pd.DataFrame, llm: RunnableWithFallbacks, question: str, refined_themes_df: pd.DataFrame, batch_size: int = 20, prompt_template: str | Path | PromptTemplate = 'theme_mapping', system_prompt: str = CONSULTATION_SYSTEM_PROMPT, concurrency: int = 10) -> tuple[pd.DataFrame, pd.DataFrame]
Map survey responses to refined themes using an LLM.
This function analyzes each survey response and determines which of the refined themes best matches its content. Multiple themes can be assigned to a single response.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| `responses_df` | `DataFrame` | DataFrame containing survey responses. Must include 'response_id' and 'response' columns. | *required* |
| `llm` | `RunnableWithFallbacks` | Language model instance to use for theme mapping. | *required* |
| `question` | `str` | The survey question. | *required* |
| `refined_themes_df` | `DataFrame` | Single-row DataFrame where each column represents a theme (from the theme_refinement stage). | *required* |
| `batch_size` | `int` | Number of responses to process in each batch. | `20` |
| `prompt_template` | `str \| Path \| PromptTemplate` | Template for structuring the prompt to the LLM. Can be a string identifier, path to a template file, or PromptTemplate instance. | `'theme_mapping'` |
| `system_prompt` | `str` | System prompt to guide the LLM's behavior. | `CONSULTATION_SYSTEM_PROMPT` |
| `concurrency` | `int` | Number of concurrent API calls to make. | `10` |
|
Returns:
| Type | Description |
|---|---|
| `tuple[DataFrame, DataFrame]` | A tuple of two DataFrames: the first contains the rows that were successfully processed by the LLM; the second contains the rows that could not be processed by the LLM. |
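A sketch of the final stage, mapping each response onto the refined themes; `refined_themes_df` is the single-row output of `theme_refinement` (or `theme_target_alignment`). Provider, import paths, and the chaining of stage outputs are assumptions, as in the earlier examples.

```python
import asyncio

import pandas as pd
from langchain_openai import ChatOpenAI  # assumed provider

from themefinder.core import (  # import paths assumed
    theme_condensation,
    theme_generation,
    theme_mapping,
    theme_refinement,
)

llm = ChatOpenAI(model="gpt-4o").with_fallbacks([ChatOpenAI(model="gpt-4o-mini")])
question = "What do you think of the proposed transport scheme?"
responses_df = pd.DataFrame(
    {"response_id": [1, 2], "response": ["Support it fully.", "Too expensive to run."]}
)

async def map_responses() -> pd.DataFrame:
    themes_df, _ = await theme_generation(responses_df, llm, question, partition_key=None)
    condensed_df, _ = await theme_condensation(themes_df, llm, question)
    refined_df, _ = await theme_refinement(condensed_df, llm, question)
    # Each response may be assigned more than one theme.
    mapping_df, _ = await theme_mapping(responses_df, llm, question, refined_df)
    return mapping_df

mapping_df = asyncio.run(map_responses())
print(mapping_df)
```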