Maybe this is my "get off my lawn" moment, but sending an unstructured LLM text request that includes:
"The output should be formatted as a JSON instance that conforms to the JSON schema below."
followed by a markdown-escaped partial JSONSchema doesn't spark joy.
Mastodon Source
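For context, a minimal sketch of the prompt pattern in question: prose format instructions followed by a markdown-escaped schema, all stuffed into a plain text request. The task text and schema below are made-up placeholders, not the actual prompt.

```python
# Minimal sketch of the "format instructions" prompt pattern described
# above. The schema and task text are illustrative placeholders.
import json

schema = {
    "properties": {
        "term": {"type": "string"},
        "description": {"type": "string"},
    },
    "required": ["term", "description"],
}

format_instructions = (
    "The output should be formatted as a JSON instance that conforms "
    "to the JSON schema below.\n\n"
    "```json\n" + json.dumps(schema, indent=2) + "\n```"
)

prompt = (
    "Extract each term and its description from the text that follows.\n\n"
    + format_instructions
)
print(prompt)
```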
The API compatibility of `void* oracle(void*)` is likely good news for the SRE world.
Mastodon Source
I'm using llama2 from https://ollama.com/ (to avoid the OpenAI round trip) and only changing the parameterized prompt. My (term, description) pairs are in YAML and serialized the same way.
The response is either a JSONSchema envelope with the found results under the `properties.<foo>` element, or an object with the results at the root under a `.<foo>` element. Both responses include additional explanatory text. Submitting both in a single prompt request produces yet another response format.
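Here's a hedged sketch of that loop, assuming the local Ollama HTTP API on its default port. The prompt text, the `terms` field name, and the regex used to dig the JSON out of the surrounding prose are all illustrative assumptions, not the actual code behind these posts.

```python
# Sketch: query llama2 via the local Ollama HTTP API, then pull the
# results out of whichever shape the reply happens to use.
import json
import re
import requests

def query(prompt: str) -> dict:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "llama2", "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    text = resp.json()["response"]

    # The reply mixes explanatory prose with JSON, so grab the first
    # {...} span rather than parsing the whole response.
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        raise ValueError(f"no JSON object found in: {text!r}")
    return json.loads(match.group(0))

def extract(payload: dict, field: str = "terms"):
    # Sometimes the model returns a JSONSchema-style envelope with the
    # results under properties.<field>; sometimes the field sits at the
    # root of the object.
    if "properties" in payload and field in payload["properties"]:
        return payload["properties"][field]
    return payload.get(field)

result = extract(query("List the terms defined in the YAML below ..."))
print(result)
```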
I'm guessing this is expected.
Going outside now.