โ† Back to Liquid Nanos LFM2-350M-ENJP-MT is a specialized translation model for near real-time bidirectional Japanese/English translation. Optimized for short-to-medium text with low latency.

Specifications

Property        Value
--------------  --------------------
Parameters      350M
Context Length  32K tokens
Task            Machine Translation
Languages       English ↔ Japanese

  • Real-time Translation: low-latency inference
  • Bidirectional: EN→JP and JP→EN
  • Edge Deployment: compact model size

Prompting Recipe

The model requires one of the following system prompts to set the translation direction, and supports single-turn conversations only.
System Prompts:
  • "Translate to Japanese." (English → Japanese)
  • "Translate to English." (Japanese → English)

Quick Start

Install:
pip install transformers torch
English to Japanese:
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "LiquidAI/LFM2-350M-ENJP-MT"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [
    {"role": "system", "content": "Translate to Japanese."},
    {"role": "user", "content": "What is C. elegans?"}
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
# Output: C. elegansとは何ですか？
Japanese to English:
messages = [
    {"role": "system", "content": "Translate to English."},
    {"role": "user", "content": "ไปŠๆ—ฅใฏๅคฉๆฐ—ใŒใ„ใ„ใงใ™ใญใ€‚"}
]

inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True)
print(response)
# Output: The weather is nice today.