freiburg.social is one of the many independent Mastodon servers you can use to participate in the fediverse.
A Mastodon server for Freiburg and the surrounding region, operated by the association freiburg.social e.V.: https://wir.freiburg.social

Borjan Tchakaloff

@simon I wonder what to make of an `llm chat` (local model) answer that stops abruptly. I got relatively consistent results by following it with "continue", but it stops again, and a ping-pong match ensues.

Do you know what's happening? Maybe it's related to the token limit?
(In this case it's the Llama 3 8B Instruct.)

@docbibi Might be that there's a `max_tokens` setting that defaults to a shorter value too. Which plugin are you using?

@simon Only `llm-gpt4all` (v0.4), I'm afraid; `llm` itself is also up to date (v0.13.1).

@simon Ah, thanks for figuring it out! Indeed, adding e.g. `-o max_tokens 2000` does help.
I think part of the problem is also making these options more discoverable.
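The fix above can be sketched as a command line. This is a hedged example, not the exact invocation from the thread: the model ID shown is a placeholder (gpt4all model names vary by installation), and the option name `max_tokens` is the one the thread identifies for `llm-gpt4all`; other plugins may name or default it differently.

```shell
# Raise the output token cap for a local model in `llm chat`.
# "Meta-Llama-3-8B-Instruct" is a placeholder model ID; substitute the
# ID that `llm models` lists on your machine. 2000 is an example value.
llm chat -m Meta-Llama-3-8B-Instruct -o max_tokens 2000

# For discoverability: list installed models together with the
# options each one accepts and their defaults.
llm models --options
```

Running `llm models --options` addresses the discoverability point: it shows, per model, which `-o` settings exist, so a short default like `max_tokens` can be spotted before the truncation surprises you.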