Conversation

@zamanafzal (Contributor) commented Dec 23, 2025

What are the relevant tickets?

https://github.com/mitodl/hq/issues/9667

Description (What does it do?)

This PR adds automated translation management functionality to the ol_openedx_course_translations plugin. The system syncs translation keys from English to target language files, translates empty keys using LLM APIs, and automatically creates pull requests in the mitxonline-translations repository.
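At its core, the sync step can be thought of as a merge of the English catalog into the target-language catalog, followed by LLM translation of any entries that are still empty. Below is a minimal, illustrative sketch of that flow; `sync_and_translate` and its arguments are hypothetical names, not the plugin's actual API:

```python
def sync_and_translate(english, target, translate):
    """Sketch of the command's core loop (illustrative, not the PR's code):
    copy any key missing from the target file, then fill empty values via
    the LLM-backed `translate` callable."""
    merged = dict(target)
    for key, source_text in english.items():
        if key not in merged:
            # sync: the key exists only in the English file
            merged[key] = ""
        if not merged[key]:
            # translate: the value is still empty
            merged[key] = translate(source_text)
    return merged
```

In the real plugin the catalogs live in the mitxonline-translations repository and the result is pushed as a pull request; this sketch only shows the key-sync and fill-empty logic.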

Relevant Test Pull Requests:
https://github.com/zamanafzal/mitxonline-translations/pulls

How can this be tested?

Ensure you're in the Open edX environment (Tutor container):

Check out this branch of open-edx-plugins.
tutor dev exec lms bash
Export the API key for the LLM provider you want to use, e.g. Mistral:
export MISTRAL_API_KEY="KEY"
Generate a GitHub token, which will be used to authenticate with GitHub and push changes:
export GITHUB_TOKEN="token"
./manage.py lms sync_and_translate_language {language_code} --model {model_name}

For example:

./manage.py lms sync_and_translate_language ar --model gemini/gemini-3-pro-preview
./manage.py lms sync_and_translate_language ar --model mistral/mistral-small-latest

The command translates the missing keys, generates the target-language files if they don't exist, and creates a PR.

@zamanafzal zamanafzal marked this pull request as ready for review December 31, 2025 11:35
@zamanafzal zamanafzal force-pushed the zafzal/enable-synce-and-trans-9667 branch from 2c27956 to fc7818b Compare December 31, 2025 13:58
@zamanafzal zamanafzal force-pushed the zafzal/enable-synce-and-trans-9667 branch from bb23480 to 744c94c Compare January 1, 2026 10:08
@asadali145 (Contributor) commented:

As discussed over the call, Mistral needs some changes to how the default model is set: it does not work when we remove mistral/ from the model name. Also, gemini is not working.

@asadali145 (Contributor) left a review comment:

LGTM!

if provider == "gemini" and not model.startswith(("gemini/", "vertex_ai/")):
    completion_kwargs["model"] = f"gemini/{model}"
if provider == "gemini":
    if not model.startswith(("gemini/", "vertex_ai/")):

Why are we including vertex_ai here?

@zamanafzal (Contributor, Author) replied:

LiteLLM allows access to Gemini in two ways:

Gemini API (prefix: gemini/)

Vertex AI (prefix: vertex_ai/)

For example, the model vertex_ai/gemini-3-pro-preview uses Vertex AI. If only gemini-3-pro-preview is provided, LiteLLM may default to using Vertex AI.


The author's explanation is correct. LiteLLM, the library used here, supports accessing Gemini models through both the native Gemini API (prefixed with gemini/) and Google Cloud's Vertex AI (prefixed with vertex_ai/). The code ensures that if a Gemini model is specified without either of these prefixes, it defaults to the gemini/ prefix for consistency with LiteLLM's expected format, while still allowing explicitly specified vertex_ai/ models to pass through unchanged.
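That prefix-defaulting behavior can be captured in isolation. The sketch below mirrors the check from the diff; `normalize_gemini_model` is an illustrative name, not a function in the PR:

```python
def normalize_gemini_model(model: str) -> str:
    """Ensure a Gemini model name carries an explicit LiteLLM route prefix:
    default to the native Gemini API (gemini/) when no prefix is given,
    and leave explicit gemini/ or vertex_ai/ names untouched."""
    if not model.startswith(("gemini/", "vertex_ai/")):
        return f"gemini/{model}"
    return model
```

With this normalization, a bare model name such as gemini-3-pro-preview is routed through the Gemini API rather than falling back to Vertex AI.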

if provider == "gemini":
    if not model.startswith(("gemini/", "vertex_ai/")):
        completion_kwargs["model"] = f"gemini/{model}"
    # Gemini 3 models require temperature = 1.0 to avoid issues

Can we explain the issues here?

@zamanafzal zamanafzal merged commit 2d8ffc5 into main Jan 6, 2026
9 checks passed
@zamanafzal zamanafzal deleted the zafzal/enable-synce-and-trans-9667 branch January 6, 2026 11:53