First of all, I think it is a great idea to give the model access to a map. Unfortunately, it seems that the script is missing a huge part at the end: the loop does not have any content, and the Tools class is missing.
lynx
I have found the problem with the cut-off: by default, aider only sends 2048 tokens to Ollama, which is why I have not noticed it anywhere except for coding.
When running /tokens in aider:
$ 0.0000 16,836 tokens total
15,932 tokens remaining in context window
32,768 tokens max context window size
Even though it reports this, it will still only send 2048 tokens to Ollama.
To fix it, I needed to add a file .aider.model.settings.yml to the repository:
- name: aider/extra_params
  extra_params:
    num_ctx: 32768
If you want inline completions, you need a model that is trained on "fill in the middle" (FIM) tasks. On their Hugging Face page they even say that this is not supported and needs fine-tuning:
We do not recommend using base language models for conversations. Instead, you can apply post-training, e.g., SFT, RLHF, continued pretraining, etc., or fill in the middle tasks on this model.
Models that can do it:
- starcoder2
- codegemma
- codellama
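If you want to try one of these directly, here is a minimal sketch of a FIM request against a local Ollama server. The <fim_*> tokens are the ones starcoder2 was trained with; other models use different markers (codellama, for example, uses <PRE>/<SUF>/<MID>), so check the model card first.

```python
import requests

# Ask starcoder2 (via a local Ollama server) to fill in the body of a
# function between a given prefix and suffix.
prefix = "def add(a, b):\n    "
suffix = "\n\nprint(add(1, 2))\n"

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "starcoder2",
        "prompt": f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>",
        "raw": True,   # bypass the chat template, send the FIM prompt as-is
        "stream": False,
        "options": {"num_predict": 32},
    },
)
print(resp.json()["response"])  # expected: something like "return a + b"
```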
Another option is to just use the Qwen model, but instead of only adding a few lines, let it rewrite the entire function each time.
Split Horizon with Poison Reverse
This is probably the only reason Microsoft Recall exists, as it is completely useless for anything else.
The --rotate option only supports normal, inverted, left and right, so it does not work for this, but you can use the --transform option to achieve the same effect.
To create the transformation matrix you can use something like: https://angrytools.com/css-generator/transform/
- for translateXY, enter half the screen resolution
- don't copy the generated code, it has the numbers in the wrong order; just type out the matrix row-wise (or compute it yourself, see the sketch after the commands below)
The final command looks like this:
xrandr --output screen-1 --transform 0.87,-0.50,960,0.50,0.87,540,0,0,1
To restore the original, use this (type it in first, because if you screw up you might not be able to see anything anymore):
xrandr --output screen-1 --transform 1,0,0,0,1,0,0,0,1
I tested it on X11.
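If you would rather compute the matrix than use the website, here is a small Python sketch. The 1920x1080 resolution, the 30 degree angle and the output name screen-1 are assumptions taken from the example above; adjust them to your setup.

```python
import math

# Build the 3x3 transform matrix xrandr expects, row-wise:
#   [ cos(a)  -sin(a)  tx ]
#   [ sin(a)   cos(a)  ty ]
#   [ 0        0       1  ]
# tx/ty shift the rotated image back onto the screen; half the
# resolution is a reasonable starting point, as described above.
angle = math.radians(30)        # rotation angle in degrees
tx, ty = 1920 / 2, 1080 / 2     # half the screen resolution

matrix = [
    math.cos(angle), -math.sin(angle), tx,
    math.sin(angle),  math.cos(angle), ty,
    0, 0, 1,
]
print("xrandr --output screen-1 --transform "
      + ",".join(f"{v:.2f}" for v in matrix))
```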
How can you do fractional rotation? Does it only work with X11, or is it also supported in Wayland?
Here is a grayscale version of the image with better contrast.
Thanks for suggesting RNote. I always use Xournal++ to take notes, but there are some problems, and RNote seems to work much nicer with gestures. The only thing that I am missing is an option for saving pen configurations, to easily switch between a black pen and a yellow marker.
I don't know what you mean by steering?
First of all, have you tried giving the model multiple examples of input/output pairs in the context? This alone already helps the model a lot to output the correct format.
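As a minimal sketch of what that looks like (the model name and the local Ollama endpoint are assumptions, the few-shot pattern itself is generic):

```python
import requests

# Two worked input/output pairs in the context, then the real input.
prompt = """Extract the city as JSON.

Input: I flew from Berlin to Tokyo last week.
Output: {"city": "Tokyo"}

Input: We are moving to Oslo in May.
Output: {"city": "Oslo"}

Input: She has lived in Lisbon for ten years.
Output:"""

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "qwen2.5", "prompt": prompt, "stream": False},
)
print(resp.json()["response"])  # ideally: {"city": "Lisbon"}
```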
Second, you can force a specific output structure by using a regex or grammar: https://python.langchain.com/docs/integrations/chat/outlines/#constrained-generation https://github.com/ggerganov/llama.cpp/blob/master/grammars/README.md
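For the regex route, a minimal sketch with the outlines library looks like this (0.x API; the model name is just an example, and newer outlines versions have renamed these entry points):

```python
import outlines

# Load a small model and constrain decoding so it can only emit an IPv4
# address; the sampler simply cannot produce tokens outside the regex.
model = outlines.models.transformers("Qwen/Qwen2.5-0.5B-Instruct")
ipv4 = r"((25[0-5]|2[0-4]\d|[01]?\d?\d)\.){3}(25[0-5]|2[0-4]\d|[01]?\d?\d)"
generator = outlines.generate.regex(model, ipv4)
print(generator("The IP address of localhost is "))
```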
And third, in case you want to train a model to respond differently and the previous steps were not good enough, you can fine-tune. I can recommend this project, as it teaches how to fine-tune a model: https://github.com/huggingface/smol-course
Depending on the size of the model you want to fine-tune and the amount of compute you have available, you can either train by updating all parameters (e.g., with ORPO) or train via PEFT (LoRA), as sketched below.
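A minimal LoRA setup with the peft library, assuming a small causal LM (the model name and the hyperparameters are just placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Wrap a base model with LoRA adapters: only the small adapter matrices
# are trained, the original weights stay frozen.
model = AutoModelForCausalLM.from_pretrained("HuggingFaceTB/SmolLM2-135M")
config = LoraConfig(
    r=16,                                 # rank of the adapter matrices
    lora_alpha=32,                        # scaling factor
    target_modules=["q_proj", "v_proj"],  # attach adapters to attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # shows how few parameters actually train
# From here, train with a normal transformers/trl Trainer on your dataset.
```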