Pictogram #52
base: main
Conversation
I like this logic!
Great work, Ali! The improvements to the generated boards are impressive. We now almost never have empty tiles in a new board.
I've left some comments to further refine the feature. Additionally, I occasionally encounter the error "ERROR: Suggestion list is empty or maxToken reached". We should figure out how to adjust the max_tokens constant to prevent this from happening.
Let me know if you need help or have any doubts.
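One possible direction (a sketch, not the project's actual retry logic): when a completion is truncated because the token budget ran out, its finish reason is "length", so the caller could retry with a larger max_tokens up to a ceiling instead of failing outright. The helper name, the doubling factor, and the ceiling below are all assumptions for illustration.

```typescript
// Hypothetical helper: decide the next max_tokens value after a truncated
// completion. Returns null when no retry should happen. The 2x growth factor
// and the 1024 ceiling are illustrative assumptions, not project constants.
const MAX_TOKENS_CEILING = 1024;

function nextMaxTokens(
  finishReason: string | undefined,
  currentMaxTokens: number
): number | null {
  // Only retry when the model stopped because the token budget was exhausted.
  if (finishReason !== "length") return null;
  const doubled = currentMaxTokens * 2;
  // Give up once the ceiling is exceeded so retries cannot loop forever.
  return doubled <= MAX_TOKENS_CEILING ? doubled : null;
}
```

The caller would read the finish reason from the completion response, and loop while `nextMaxTokens` keeps returning a value.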
src/engine.ts
Outdated
const validatedWords: string[] = [];
const unavailableWords: string[] = [];

for (const word of words) {
  if (validatedWords.length >= maxSuggestions) {
    break;
  }

  const isAvailable = await checkWordAvailability(
    word,
    language,
    symbolSet,
    globalSymbolsSymbolSet
  );

  if (isAvailable) {
    validatedWords.push(word);
  } else {
    unavailableWords.push(word);
  }
}
// In case the number of validated words is less than maxSuggestions, we add unavailable words to reach maxSuggestions
while (validatedWords.length < maxSuggestions) {
  validatedWords.push(unavailableWords.pop() || "");
}

return await fetchPictogramsURLs({
  words: validatedWords,
  language,
  symbolSet,
  globalSymbolsSymbolSet,
});
Currently, the function makes sequential requests to retrieve each pictogram, filters them to check for availability, and then re-fetches to create a new list of validated suggestions. This results in redundant API calls and can impact performance.
Suggested Improvements:
Create the Suggestions Array While Fetching: Build the Suggestions array during the initial availability checks, avoiding unnecessary re-fetching of images. This will improve both performance and API efficiency.
Batch Fetch Requests with Limited Concurrency: Instead of fetching each image individually, you can use fetchPictogramsURLs
to batch the requests for multiple words at once. By limiting the number of concurrent requests (e.g., fetching 5 words at a time), you can significantly reduce the number of sequential API calls, improving efficiency and reducing API load.
These adjustments should lead to better performance and make the function more efficient.
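The first suggestion above can be sketched as a single pass that collects validated words while checking availability, instead of filtering first and re-fetching afterwards. This is an illustrative outline, not the PR's final code; `checkAvailability` stands in for the project's real availability check.

```typescript
// Single-pass sketch: build the validated list during the availability
// checks and stop as soon as maxSuggestions is reached, so no word is
// checked (or later re-fetched) unnecessarily.
async function collectValidated(
  words: string[],
  maxSuggestions: number,
  checkAvailability: (word: string) => Promise<boolean>
): Promise<{ validated: string[]; fallback: string[] }> {
  const validated: string[] = [];
  const fallback: string[] = [];
  for (const word of words) {
    if (validated.length >= maxSuggestions) break; // stop early: no wasted calls
    if (await checkAvailability(word)) {
      validated.push(word);
    } else {
      fallback.push(word); // keep as a backup in case we run short
    }
  }
  return { validated, fallback };
}
```

The returned `fallback` list can then top up the result when fewer than `maxSuggestions` words validate.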
Makes sense. Will implement!
Good work, Ali! Let me know if you have any doubts or if I can help you with something.
// A function to generate a prompt for generating images from Leonardo AI using GPT-3.5-turbo-instruct and a provided template and word
export async function generatePromptForImageGeneration({
  word,
}: {
  word: string;
}): Promise<string> {
  const completionRequestParams = {
    model: "gpt-3.5-turbo-instruct",
    prompt: `Create a detailed prompt to generate a pictogram for '${word}'.
First, determine if this is primarily an ACTION or OBJECT, then create a prompt following the appropriate template below.

For ACTIONS (verbs, activities):
- Show a figure actively performing the action
- Include clear motion indicators where appropriate
- Focus on the most recognizable moment of the action
- Use side view if it better shows the action
- Include minimal but necessary context elements

Style requirements:
- Bold black outlines
- Flat colors
- High contrast
- Centered composition
- White background
- Simple geometric shapes

Return only the prompt, no explanations. Keep it under 100 words.`,
    temperature: 0,
    max_tokens: 150,
  };

  const response = await globalConfiguration.openAIInstance.createCompletion(
    completionRequestParams
  );
  const prompt = response.data?.choices[0]?.text;
  if (!prompt) throw new Error("Error generating prompt for image generation");
  return prompt;
}
Great job! Please create a separate PR with these changes—they look fantastic! Additionally, open a new PR in the Cboard-AI-Builder to implement this functionality so we can test it thoroughly.
@@ -149,6 +188,44 @@ async function fetchPictogramsURLs({
  });
}

async function checkWordAvailability(
Please remove this function, as it is no longer in use.
async function processBatch<T>({
  items,
  batchSize,
  processor,
}: {
  items: T[];
  batchSize: number;
  processor: (batch: T[]) => Promise<any>;
}): Promise<any[]> {
  const results: any[] = [];

  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    const batchResults = await processor(batch);
    results.push(...batchResults);
  }

  return results;
}
Awesome work! Let's try to define the types for the function:
async function processBatch<T, F>({
  items,
  batchSize,
  processor,
}: {
  items: T[];
  batchSize: number;
  processor: (batch: T[]) => Promise<F[]>;
}): Promise<F[]> {
  const results: F[] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    const batchResults = await processor(batch);
    results.push(...batchResults);
  }
  return results;
}
And use it like:
const suggestions = await processBatch<string, Suggestion>({
  items: words,
  batchSize: 5,
  processor: (batch) =>
    fetchPictogramsURLs({
      words: batch,
      language,
      symbolSet,
      globalSymbolsSymbolSet,
    }),
});
Alternatively, you can define the exact types instead of using generics.
Another point to consider: Currently, we are fetching all the Arasaac images for the words array. Instead, we could validate each image as it is returned, populate the results array dynamically, and stop fetching once the array is fully populated. This approach would help us avoid unnecessary API requests and improve performance.
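The early-stop idea above can be sketched by extending the batch processor so it validates each batch's results as they return and stops requesting further batches once enough valid results are collected. This is an illustrative sketch only; `processBatchUntilFull`, `needed`, and `isValid` are hypothetical names, not the project's API.

```typescript
// Sketch: process items in batches, but stop issuing new batches as soon
// as `needed` results pass validation. Generic over input (T) and result (F)
// types, mirroring the typed processBatch suggested in the review.
async function processBatchUntilFull<T, F>({
  items,
  batchSize,
  needed,
  processor,
  isValid,
}: {
  items: T[];
  batchSize: number;
  needed: number;
  processor: (batch: T[]) => Promise<F[]>;
  isValid: (result: F) => boolean;
}): Promise<F[]> {
  const valid: F[] = [];
  for (let i = 0; i < items.length && valid.length < needed; i += batchSize) {
    const batchResults = await processor(items.slice(i, i + batchSize));
    valid.push(...batchResults.filter(isValid)); // keep only usable results
  }
  return valid.slice(0, needed); // the last batch may overshoot
}
```

With `batchSize: 5`, a board that fills up after two batches would skip every remaining API call.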
const validSuggestions = suggestions.filter(
  suggestion =>
    suggestion.pictogram.images.length > 0 &&
    suggestion.pictogram.images[0].url !== ""
);

return validSuggestions.slice(0, maxSuggestions);
What happens if the validSuggestions array is not large enough to reach the maxSuggestions limit? In that case, we should consider filling the remaining slots with fallback or alternative suggestions, ensuring the final results still meet the expected size.
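One way to implement the top-up idea, sketched with hypothetical names (`fillToMax` and its parameters are illustrative, not the PR's code): take the valid suggestions first, then draw from a fallback list until the target size is met, stopping cleanly if fallbacks run out rather than padding with empty placeholders.

```typescript
// Sketch: top up `valid` with entries from `fallbacks` until maxSuggestions
// is reached. Never pads with empty values, so the result may still be
// shorter than maxSuggestions when both lists are exhausted.
function fillToMax<T>(valid: T[], fallbacks: T[], maxSuggestions: number): T[] {
  const result = valid.slice(0, maxSuggestions); // also trims any overshoot
  for (const alt of fallbacks) {
    if (result.length >= maxSuggestions) break;
    result.push(alt);
  }
  return result;
}
```

Compared with the earlier `unavailableWords.pop() || ""` loop, this avoids inserting empty strings that would surface as blank tiles.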
I've updated the ARASAAC pictogram fetching logic and reverted the prompt to v1.4.
For this implementation, I tried the following logic.
Closes #50