feat(config): expose embedding.max_concurrent and vlm.max_concurrent …#282

Merged
MaojiaSheng merged 1 commit into volcengine:main from yangxinxin-7:feat/config_concurrent
Feb 25, 2026
Conversation

@yangxinxin-7
Collaborator

  • Add max_concurrent field to VLMConfig (default: 100) to control
    concurrent LLM calls inside SemanticProcessor
  • Change EmbeddingConfig.max_concurrent default from 1 to 10
  • Thread both values through core._init_storage → init_queue_manager
    → QueueManager (_max_concurrent_embedding / _max_concurrent_semantic)
    → SemanticProcessor(max_concurrent_llm)
  • Update README and README_CN with the new config fields and defaults
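The wiring described above can be sketched roughly as follows. The config and class names (`EmbeddingConfig`, `VLMConfig`, `QueueManager`, `SemanticProcessor`, `init_queue_manager`) and the defaults come from the PR description; the exact field layout and constructor signatures are illustrative assumptions, not the project's actual code:

```python
from dataclasses import dataclass


@dataclass
class EmbeddingConfig:
    # Default raised from 1 to 10 by this PR.
    max_concurrent: int = 10


@dataclass
class VLMConfig:
    # New field: caps concurrent LLM calls inside SemanticProcessor.
    max_concurrent: int = 100


class SemanticProcessor:
    def __init__(self, max_concurrent_llm: int):
        self.max_concurrent_llm = max_concurrent_llm


class QueueManager:
    def __init__(self, max_concurrent_embedding: int, max_concurrent_semantic: int):
        self._max_concurrent_embedding = max_concurrent_embedding
        self._max_concurrent_semantic = max_concurrent_semantic
        # The semantic limit is forwarded on to the processor.
        self._semantic = SemanticProcessor(max_concurrent_llm=max_concurrent_semantic)


def init_queue_manager(embedding: EmbeddingConfig, vlm: VLMConfig) -> QueueManager:
    # Threads both config values through to QueueManager, as the PR describes.
    return QueueManager(
        max_concurrent_embedding=embedding.max_concurrent,
        max_concurrent_semantic=vlm.max_concurrent,
    )
```

With the defaults, `init_queue_manager(EmbeddingConfig(), VLMConfig())` would yield a manager allowing 10 concurrent embedding calls and 100 concurrent semantic (LLM) calls.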

@CLAassistant
CLAassistant commented Feb 25, 2026

CLA assistant check
All committers have signed the CLA.

@yangxinxin-7 yangxinxin-7 force-pushed the feat/config_concurrent branch from 86d1af7 to b1c4313 Compare February 25, 2026 09:32
@MaojiaSheng MaojiaSheng merged commit 7c6bebe into volcengine:main Feb 25, 2026
6 checks passed
@github-project-automation github-project-automation bot moved this from Backlog to Done in OpenViking project Feb 25, 2026
