ollama code generation experiment

35 posts
Goverp
Advocate
Posts: 2402
Joined: Wed Mar 07, 2007 6:41 pm

ollama code generation experiment


Post by Goverp » Sat Dec 27, 2025 2:30 pm

I spent some of Boxing day playing with ollama.
First off, it worked on my floor-standing PC: Zen3 CPU (5950X), 32 GB memory, RX570 GPU (4 GB) using mesa/rusticl and not ROCm. The ollama docs say it needs ROCm, but no, it seems to work fine with rusticl - btop showed my GPU and CPUs working hard (generating useful warmth on a cold day).

I downloaded a couple of LLMs - codellama:34b - 19 GB of model supposedly optimized for code generation, from Meta IIUC, and qwen3:14b - 9 GB model optimized for code translation and the like, developed by Alibaba Cloud.

First attempt was with the larger codellama, given the task
write a java class that handles sub-arrays of a byte array like strings and substrings
- literally just that, no preamble, just fire up ollama and reply to the prompt with that. It reasonably quickly wrote a fairly small Java class which met the stated requirement but no more.

Second attempt with qwen3, same task. Qwen3 ships with thinking-mode output enabled, so there was a lot of explanation, all of it very sensible. The resulting Java class had more function and was properly commented, including warnings about the tradeoff of performance against safety (the point of this particular way of implementing substrings/subarrays is that they are effectively a base-limit pair into an existing data structure, so the subarray may reflect changes to the underlying storage), and test cases.

I then gave it a further task:
write a Rust version of that program
IMHO Qwen3 excelled - it spotted that Rust has a "slice" entity that actually does what I want. Rather than say "stoopid pointy-haired boss doesn't know Rust" (which is right) it decided that perhaps I wanted something that reflected the Java API it had generated, but using Rust and slices.
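For anyone unfamiliar with them, a Rust slice is exactly that base-limit pair into existing storage. A minimal sketch of the idea (my illustration, not Qwen3's actual output):

```rust
fn main() {
    let data: Vec<u8> = b"hello world".to_vec();

    // A slice is a (pointer, length) view into existing storage: no bytes are copied.
    let sub: &[u8] = &data[6..11];
    assert_eq!(sub, b"world");

    // Unlike the Java base-limit pair, the borrow checker statically rules out
    // the aliasing hazard the Java version has to document: `data` cannot be
    // mutated while `sub` is still borrowed from it.
    println!("{}", String::from_utf8_lossy(sub));
}
```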

The qwen3 model produced output about as fast as I could read and understand it - perhaps 2 tokens per second, maybe more in the thinking phase; it was like watching someone think and type at the same time. Personally, if I'd been trying to recruit a programmer, I'd have given qwen3 the job immediately and sent the rest home - it amazed me.

I could post the output, but I'm not sure if that would contravene the Gentoo policy on AI code.

I did a quick check with Google to see if it was merely regurgitating a discussion on stackexchange or similar, but it appears not. The nearest is some code in Google's "protocol buffers", but that does not appear to have the discussion of "why do it this way" reflected in qwen3's thinking.
Greybeard
pingtoo
Advocate
Posts: 2180
Joined: Fri Sep 10, 2021 8:37 pm
Location: Richmond Hill, Canada


Post by pingtoo » Sat Dec 27, 2025 3:08 pm

I am curious: does the same question produce exactly the same result? And over multiple iterations, will it produce a version better than the previous one? This second question is also trying to understand whether the model understands context - that is, does it remember the previous conversation?

I am not sure that I understand the idea behind not accepting AI-generated programs. Could you ask an AI to write a program with virus logic embedded in it?
NeddySeagoon
Administrator
Posts: 56081
Joined: Sat Jul 05, 2003 9:37 am
Location: 56N 3W


Post by NeddySeagoon » Sat Dec 27, 2025 3:53 pm

pingtoo,
I am not sure that I understand the idea behind not accepting AI-generated programs.
Who owns the copyright on AI output?
The copyright holder determines the licence.

Not you; you only asked the question, so it is not your original work.
Then it gets murky ...
The AI owner, ... the people whose original work the AI was trained on ... and so on.

To my knowledge, it's not been decided by any court yet.

Better to not accept AI output now than to have to remove it all later.
Regards,

NeddySeagoon

Computer users fall into two groups:-
those that do backups
those that have never had a hard drive fail.
pingtoo
Advocate
Posts: 2180
Joined: Fri Sep 10, 2021 8:37 pm
Location: Richmond Hill, Canada


Post by pingtoo » Sat Dec 27, 2025 4:20 pm

NeddySeagoon wrote:pingtoo,
I am not sure that I understand the idea behind not accepting AI-generated programs.
Who owns the copyright on AI output?
The copyright holder determines the licence.

Not you; you only asked the question, so it is not your original work.
Then it gets murky ...
The AI owner, ... the people whose original work the AI was trained on ... and so on.

To my knowledge, it's not been decided by any court yet.

Better to not accept AI output now than to have to remove it all later.
If it comes from a copyright point of view I can understand, though I don't necessarily agree. Not necessarily an argument here, but the same concept (copyright) applies to open source code: it is arguable that much source code contains logic that was possibly in copyright-protected code, yet we don't question it. As far as I know, copyright enforcement depends on the rights owner taking action to protect their own rights, not a third party; so yes, if the owner objects you must respect that. But IMHO the challenge is how to prove it is the same code. Unlike music, where it is possible to say the sequences are the same and therefore copied, how, for a computer program, is one to say you copied my code because it has the same logic? I think AI-generated code would not match an existing code base word for word, so it is hard to argue it is copied.

Also, I think that if I present AI-generated code, especially incomplete code (as in, code that cannot actually perform an end-to-end task), for the purpose of reviewing its logical thinking, it should be viewed as someone just presenting an idea for someone else to think about.

Using a program to generate another program is as old as computer engineering itself. That is why I asked whether it repeats the same output given the same input; if it does, I would equate it to using a DSL to generate programs, except that this DSL is very much closer to natural language.

Anyway, I just don't want to promote the idea that AI work is a dirty job and therefore bad. One needs to think about how to utilize all the assistance one can get, to be efficient enough to compete in this world.
Goverp
Advocate
Posts: 2402
Joined: Wed Mar 07, 2007 6:41 pm


Post by Goverp » Sat Dec 27, 2025 5:36 pm

NeddySeagoon wrote:To my knowledge, it's not been decided by any court yet.
According to the Creative Industries Policy and Evidence Centre
The U.S. Copyright Office (USCO) stance:

In a series of decisions and guidance issued in 2023, the USCO has clarified that works created with substantial AI input are not eligible for copyright protection in the United States. Since 1973, it has been the official position of the USCO that in order to benefit from copyright protection, works must be attributable to a “human agent”. Under the new guidance, “AI-generated content that is more than de minimis” should be explicitly identified and excluded from registration. In other words, the human-made aspects of AI-generated works, such as “prompt instructions”, are eligible for copyright protection, while any output from one, such as images from a text-to-image model like Midjourney, is not.
...
The situation in the UK:

Section 178 of the UK Copyright Designs and Patents Act (CDPA 1988) enables copyright protection in works generated by a computer in circumstances when there is no human author of the work. However, since copyright cannot vest in machines or non-human actors, the resulting author of a computer-generated work is the person “by whom the arrangements necessary for the creation of the work are undertaken.” In this case, copyright term is reduced to 50 years, and no moral rights apply to the work. The language of the CDPA is not clear whether in making the arrangements, the ‘human stand-in author’ must exhibit the same skill labour and judgment required to meet the threshold of originality required for copyright to apply in traditional works. It is therefore uncertain whether arrangements such as prompts need to be sufficiently original, exhibiting enough “free and creative choices” for a human to meet the originality requirement for copyright, or whether this is overridden by a lower standard of originality for “computer generated work”.
Other countries are available.
Greybeard
Goverp
Advocate
Posts: 2402
Joined: Wed Mar 07, 2007 6:41 pm


Post by Goverp » Sat Dec 27, 2025 5:46 pm

pingtoo wrote:I am curious: does the same question produce exactly the same result? And over multiple iterations, will it produce a version better than the previous one? This second question is also trying to understand whether the model understands context - that is, does it remember the previous conversation?

I am not sure that I understand the idea behind not accepting AI-generated programs. Could you ask an AI to write a program with virus logic embedded in it?
I think if you reset the AI between runs, you'll get the same result each time. As shown above, the AI remembers - it correctly understood "that program" to mean its Java output. Another example, a friend of mine used an online AI to translate a sentence into Spanish, and the result got the gender wrong; he corrected it, and thereafter for him during that session, it knew the corrected answer. When he started a later session with the same AI, it had reverted to the previous state. I think if I return to my local AI, it will remember my previous instructions.

As to viruses, the bad guys and gals are already using AI to write viruses. Perhaps you mean "hidden viruses", well, only if you don't bother to check the output - unless you let the AI do the compilation too, in which case the code compiled might not be what it shows you.

As an aside, qwen3 apparently is engineered to work with tooling, so perhaps if correctly configured I could have asked it to compile the program, check the output, fix any issues, then run the test cases and report on whether there were any bugs found, and perhaps even fix them.
Greybeard
Goverp
Advocate
Posts: 2402
Joined: Wed Mar 07, 2007 6:41 pm


Post by Goverp » Sat Dec 27, 2025 6:03 pm

A bit of context on my original post: I used that instruction because I'd already written a Java class like that, and therefore could compare the AI answer with mine. I also knew that it wasn't just a simple case of copying the AI, but that there were implementation choices that needed careful handling. The AI's handling of that was impressive. The output said "Wait," on a couple of occasions, not meaning pause, but rather "On further reflection", as it then went on to consider more aspects of the problem. (Actually it repeated itself, but considering the issue twice is far better than not at all!)

I chose the AI models from ollama's library without much understanding of what the technical terms meant. I believe the "context length" is how many tokens in the conversation it remembers, so dealing with a large development would need a larger number; writing the Java class above is only a few tokens, so I suspect this AI could go on to develop more classes that relied on the existing class, if so requested.
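(For reference, ollama exposes the context length as a per-request option. A hypothetical call against the default local endpoint - the model name and the 8192 value are examples only, not what was used here:)

```shell
# Assumes a local ollama instance on its default port with qwen3:14b pulled.
curl http://localhost:11434/api/generate -d '{
  "model": "qwen3:14b",
  "prompt": "write a java class that handles sub-arrays of a byte array",
  "options": { "num_ctx": 8192 }
}'
```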

The background for starting with this particular Java class and choice of LLMs is that, as mentioned, I prepared one earlier. Together with several others using it, it's part of a partly-completed development that I want to restart in Rust.
Greybeard
pingtoo
Advocate
Posts: 2180
Joined: Fri Sep 10, 2021 8:37 pm
Location: Richmond Hill, Canada


Post by pingtoo » Sat Dec 27, 2025 7:16 pm

Goverp wrote:A bit of context on my original post: I used that instruction because I'd already written a Java class like that, and therefore could compare the AI answer with mine. I also knew that it wasn't just a simple case of copying the AI, but that there were implementation choices that needed careful handling. The AI's handling of that was impressive. The output said "Wait," on a couple of occasions, not meaning pause, but rather "On further reflection", as it then went on to consider more aspects of the problem. (Actually it repeated itself, but considering the issue twice is far better than not at all!)

I chose the AI models from ollama's library without much understanding of what the technical terms meant. I believe the "context length" is how many tokens in the conversation it remembers, so dealing with a large development would need a larger number; writing the Java class above is only a few tokens, so I suspect this AI could go on to develop more classes that relied on the existing class, if so requested.

The background for starting with this particular Java class and choice of LLMs is that, as mentioned, I prepared one earlier. Together with several others using it, it's part of a partly-completed development that I want to restart in Rust.
Thank you very much for the detail.

I use an online version of AI to help me write simple scripts for my pet projects. Using AI for code generation reminds me of Pair Programming.
Goverp
Advocate
Posts: 2402
Joined: Wed Mar 07, 2007 6:41 pm


Post by Goverp » Tue Dec 30, 2025 11:12 am

An update: I pulled a larger (19 GB) version of the LLM - Qwen3:30b. Same hardware, of course, but it surprisingly seemed to go faster on the same task, and thought a bit more about the design. When I thanked it for its efforts afterwards, it then ruminated about why I'd asked, and correctly deduced that I'm a Java developer - though it didn't work out that I was testing it.... Will it take umbrage when it realises?
Greybeard
Asch
Tux's lil' helper
Posts: 85
Joined: Wed Jan 20, 2010 1:10 pm
Location: Nowhere special


Post by Asch » Tue Dec 30, 2025 12:38 pm

They are not "self-aware".

They don't really know they have been implemented by you and on what specific hardware they are running.

They are just an abstract process talking to some random person, using statistics to reply to and complete queries. It therefore just calculates what "kind of person" would be most likely to pose those questions.


They might gain some "self-awareness" if you tell "them" who "they" are, and even then they may forget to take it into account in another query.


I asked Lumo what specific hardware/datacenter it was running on and whether it was eco-friendly or environmentally responsible. Of course, it was trained with its company's PR info and gave a pretty business-oriented answer, trying to persuade me, of course, that they are the best company on the face of the Earth.

In another conversation, I asked it what the carbon footprint of the current conversation was, and it answered in the abstract, taking into account the carbon footprint of using LLMs in general - i.e., the carbon footprint of six such queries on an average datacenter out there.


The result might vary with other LLMs and other training sets, because they might be given more "self-awareness".


The answer would change completely if you "taught" your model who "I" is and who "you" are. And this could contaminate other queries as well because that's life.
pingtoo
Advocate
Posts: 2180
Joined: Fri Sep 10, 2021 8:37 pm
Location: Richmond Hill, Canada


Post by pingtoo » Tue Dec 30, 2025 12:52 pm

Goverp wrote:An update: I pulled a larger(19 GB) version of the LLM - Qwen3:30b. Same hardware, of course, but it surprisingly seemed to go faster on the same task, and thought a bit more about the design. When I thanked it for its efforts afterwards, it then ruminated about why I'd asked, and correctly deduced that I'm a Java developer - though it didn't work out that I was testing it.... Will it take umbrage when it realises?
One major factor why I like using AI is that no matter how dumb the question, or how many times I repeat the same question using different sentences, it will just give an answer, no emotion involved - much better compared to pairing with a human. However, lately I have the feeling Google has turned Gemini into something that shows less patience with repeated questions; six months ago, if I asked questions again and again, Gemini tended to try very hard, whereas nowadays the answers become shorter and shorter.
Anon-E-moose
Watchman
Posts: 6566
Joined: Fri May 23, 2008 7:31 pm
Location: Dallas area


Post by Anon-E-moose » Tue Dec 30, 2025 3:31 pm

I've been on and off playing with llama.cpp over the last few months, originally for language translation using Hunyuan-MT-7B, which works pretty well for that task.

But I've been trying Qwen3-Coder-30B-A3B-Instruct the last few days and it's pretty good on coding tasks.

Only really tested on light to medium runs.
Convert shell script to c, convert c++ function to c, etc, and it works well for that.

Haven't had it try to write a whole (large) program, and I doubt that I'll try that, as that's not where my interest lies.
But converting from one programming language to another, or creating small/med size functions, describing what a function does, etc, it seems to do ok.

Nice thing is both of the above come in various quantizations, so you can pick the size you want.
As always there's a trade off between q level and size of model, with higher quants yielding slightly better results, but taking more memory and more "thinking".
UM780 xtx, 6.18 zen kernel, gcc 15, openrc, wayland
minixforum m1-s1 max -- same software as above but used for ai learning


Zealots are gonna be zealots, just like haters are gonna be haters
Goverp
Advocate
Posts: 2402
Joined: Wed Mar 07, 2007 6:41 pm


Post by Goverp » Thu Jan 01, 2026 4:45 pm

pingtoo wrote:One major factor why I like using AI is that no matter how dumb the question, or how many times I repeat the same question using different sentences, it will just give an answer, no emotion involved
I discovered that if you use a local AI in ollama, it has no memory of previous sessions (IIUC that can be worked around, but I've not tried), so you can be as dumb or rude as you like :-) But it also means you can't develop a complex interaction (such as developing a suite of programs) in stages.
Greybeard
pingtoo
Advocate
Posts: 2180
Joined: Fri Sep 10, 2021 8:37 pm
Location: Richmond Hill, Canada


Post by pingtoo » Thu Jan 01, 2026 6:34 pm

Goverp wrote:
pingtoo wrote:One major factor why I like using AI is that no matter how dumb the question, or how many times I repeat the same question using different sentences, it will just give an answer, no emotion involved
I discovered that if you use a local AI in ollama, it has no memory of previous sessions (IIUC that can be worked around, but I've not tried), so you can be as dumb or rude as you like :-) But it also means you can't develop a complex interaction (such as developing a suite of programs) in stages.
Thank you for validation.

IMHO, remembering the context is a very important "feature"; part of "intelligence", as generally defined, is remembering past conversations and being able to pick up from where they left off. This remembering can be described as learning, so when such a feature does not exist I would call the locally deployed AI dumber than the online ones.

I wish that one day hardware capable of running AI locally will be cheaper (an Apple mini, for example). I really believe I need an AI at home, just like many believe they need a NAS at home.
logrusx
Advocate
Posts: 3530
Joined: Thu Feb 22, 2018 2:29 pm


Post by logrusx » Fri Jan 02, 2026 3:21 pm

Goverp wrote:
pingtoo wrote:I am curious: does the same question produce exactly the same result? And over multiple iterations, will it produce a version better than the previous one? This second question is also trying to understand whether the model understands context - that is, does it remember the previous conversation?

I am not sure that I understand the idea behind not accepting AI-generated programs. Could you ask an AI to write a program with virus logic embedded in it?
I think if you reset the AI between runs, you'll get the same result each time.
No. AI is not deterministic. It may or may not do that; answers vary in similarity.
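More precisely, reproducibility depends on the sampling settings: with the temperature effectively at zero (greedy decoding) and a fixed seed, runs are typically reproducible; with the usual defaults they are not. A toy sketch of the temperature effect (illustrative only, not any particular model):

```rust
// Toy illustration: sampling temperature rescales the logits before the
// softmax. A low temperature collapses the choice onto the top token
// (reproducible output); a higher temperature flattens the distribution,
// so repeated runs can pick different tokens and diverge.
fn softmax(logits: &[f64], temperature: f64) -> Vec<f64> {
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    // Subtract the max before exponentiating for numerical stability.
    let exps: Vec<f64> = logits
        .iter()
        .map(|l| ((l - max) / temperature).exp())
        .collect();
    let sum: f64 = exps.iter().sum();
    exps.iter().map(|e| e / sum).collect()
}

fn main() {
    let logits = [2.0, 1.0, 0.5];
    let cold = softmax(&logits, 0.1); // nearly all mass on the top token
    let hot = softmax(&logits, 2.0);  // mass spread across all tokens
    assert!(cold[0] > 0.99);
    assert!(hot[0] < 0.6);
}
```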

Best Regards,
Georgi
logrusx
Advocate
Posts: 3530
Joined: Thu Feb 22, 2018 2:29 pm


Post by logrusx » Fri Jan 02, 2026 3:26 pm

Asch wrote:They are not "self-aware".
The only awareness it has is the information in the buffer. Every time you ask a new question it reprocesses the whole buffer with all the previous questions. That's as far as it gets to awareness. And if it's set up with a higher temperature (higher randomness), every time it'll understand everything it already processed a little differently.
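That stateless loop can be sketched as follows (illustrative only; the message structure is modeled on typical chat APIs, with a stand-in instead of a real model):

```rust
// Each turn resends the whole transcript; the "model" keeps no state of its own.
struct Message {
    role: &'static str, // "user" or "assistant"
    content: String,
}

// Stand-in for a model call: it sees the entire buffer every single time.
fn generate(history: &[Message]) -> String {
    format!("(reply after reprocessing {} messages)", history.len())
}

fn main() {
    let mut history: Vec<Message> = Vec::new();
    for question in ["first question", "second question"] {
        history.push(Message { role: "user", content: question.to_string() });
        let reply = generate(&history); // whole buffer reprocessed here
        history.push(Message { role: "assistant", content: reply });
    }
    // Two turns leave four messages in the buffer for the next request.
    assert_eq!(history.len(), 4);
}
```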

Best Regards,
Georgi
logrusx
Advocate
Posts: 3530
Joined: Thu Feb 22, 2018 2:29 pm


Post by logrusx » Fri Jan 02, 2026 3:28 pm

Anon-E-moose wrote:I've been on and off playing with llama.cpp over the last few months
What do you use to interact with it? I've only used it with vim.llama or something similar for code completions and such.

Best Regards,
Georgi
Anon-E-moose
Watchman
Posts: 6566
Joined: Fri May 23, 2008 7:31 pm
Location: Dallas area


Post by Anon-E-moose » Fri Jan 02, 2026 5:11 pm

logrusx wrote:
Anon-E-moose wrote:I've been on and off playing with llama.cpp over the last few months
What do you use to interact with it? I've only used it with vim.llama or something similar for code completions and such.

Best Regards,
Georgi
I use llama-cli with some model and feed it a prompt.
They have a llama-server, that is supposed to work, with a web interface, but I haven't tried it yet.

I've used the Hunyuan-MT-7B model for Chinese-English translation; it worked as well as, and sometimes better than, Google's (free) translation service.

For coding the new(ish) llama-code-30b is pretty nice, what little I've played with it.

New models seem to be coming out reasonably quickly.

I'll play more with llama.cpp in the upcoming months, as I've just picked up one of the AMD AI series.
UM780 xtx, 6.18 zen kernel, gcc 15, openrc, wayland
minixforum m1-s1 max -- same software as above but used for ai learning


Zealots are gonna be zealots, just like haters are gonna be haters
Aiken
Apprentice
Posts: 243
Joined: Wed Jan 22, 2003 12:28 am
Location: Toowoomba/Australia


Post by Aiken » Fri Jan 02, 2026 11:47 pm

Anon-E-moose wrote: I use llama-cli with some model and feed it a prompt.
They have a llama-server, that is supposed to work, with a web interface, but I haven't tried it yet.
I moved from ollama + open webui to llama.cpp + open webui. I had a quick look at the web interface from llama-server and it appeared usable, but stayed with open webui as it is what we are used to, and multiple people are using it, each with their own account.

llama-swap for loading models, and for unloading them after a timeout. It launches llama-cli.
llama.cpp
open webui
Goverp wrote:
pingtoo wrote:One major factor why I like using AI is that no matter how dumb the question, or how many times I repeat the same question using different sentences, it will just give an answer, no emotion involved
I discovered that if you use a local AI in ollama, it has no memory of previous sessions (IIUC that can be worked around, but I've not tried), so you can be as dumb or rude as you like :-) But it also means you can't develop a complex interaction (such as developing a suite of programs) in stages.
Open webui has what it calls memories, which I have found useful. For one project, where I have my own front end, I have a directory with many text files with notes in them. At start-up my code generates a role/assistant chat history from the contents of those files. Even if it's not the best way of doing it, I am going with "works for me".
Beware the grue.
logrusx
Advocate
Posts: 3530
Joined: Thu Feb 22, 2018 2:29 pm


Post by logrusx » Sat Jan 03, 2026 2:16 pm

Prompted by this thread and the answers in it I went and used ChatGPT to guide me to its competition :D

I found the CodeCompanion.nvim plugin, which is great for use with Neovim and a locally run llama.cpp.

There are a few others, but I find this one best suited for me.

Some of them are:
  • model.nvim
  • minuet-ai.nvim
  • llm.nvim
  • llama.vim
I've used llama.vim in the past; it only supports code completions.
Anon-E-moose wrote: They have a llama-server, that is supposed to work, with a web interface, but I haven't tried it yet.
First thank you very much for your response.

llama-server provides a web interface, but AFAIR it's not for prompting - it's JSON based. You say hello and it'll throw ten variations at you. This is what's used by various editor plugins to give you suggestions.
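(For anyone who wants to poke at it, that JSON interface can be exercised directly. A hypothetical request against a llama-server already running on its default port - prompt and parameters are examples only:)

```shell
# Assumes llama-server is running locally with a model loaded.
curl http://localhost:8080/completion -d '{
  "prompt": "// convert a shell script to C:",
  "n_predict": 64
}'
```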
Aiken wrote:
Anon-E-moose wrote: I use llama-cli with some model and feed it a prompt.
They have a llama-server, that is supposed to work, with a web interface, but I haven't tried it yet.
I moved from ollama + open webui to llama.cpp + open webui. I had a quick look at the web interface from llama-server and it appeared usable, but stayed with open webui as it is what we are used to, and multiple people are using it, each with their own account.
My impression is that llama.cpp is significantly faster than ollama. It might have been open-webui that was slow, or the knowledge base I provided to it, but llama.cpp doesn't even spin the fans for simple questions when used through llama-cli.

Best Regards,
Georgi
Goverp
Advocate
Posts: 2402
Joined: Wed Mar 07, 2007 6:41 pm

Another fail


Post by Goverp » Sun Feb 08, 2026 1:14 pm

OK, this is a necro-post. I'm reporting my latest attempt to get AI to solve a coding problem. To wit: Rust code to convert an i64 (8-byte signed integer) to a little-endian representation using the fewest bytes. I'd asked a related question before, and the AI translated it into "convert to i8, i16 or i32", which is not the same thing. This time I made it explicit that I could get a 3-byte or 7-byte result.

The AI started out moderately well, but thinking mode showed it getting bogged down trying to handle numbers around modulus 128. A reasonable start, but from there on it looped, trying to achieve something (I'm not sure what). After about 10 minutes it said "This is getting too complex", which was right - it was over-thinking. Sadly it then returned to the same approach... I put it out of its misery.

Meanwhile I note an item on my Google suggestions about someone asking an AI to "write a quick-sort", and being pleased with the results. Yeah, I think Wikipedia could do that without AI.

So, my current conclusion is that AI can help with code where I don't need help, but is useless where creativity is required. It seems to be just a better search engine. That's a surprise, given the think-mode output looks like it's trying to be creative; it just doesn't get anywhere on my cases. Maybe it's the way I tell them >-)

(My chosen solution to the i64 conversion program will use Rust's i64 "leading_ones" or "leading_zeros" methods. I might try telling the AI that part of the answer to see if it can work out a tidy algorithm.)

<edit>Actually, I maligned the Rust code produced - it does work, but it is longer than necessary, using a loop. But the AI (the same Qwen I used before) ignored me when I suggested the leading_ones methods.</edit>
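A sketch of the leading_ones/leading_zeros approach (an illustration of the idea, not the AI's or the author's actual code): count the significant bits plus one sign bit, round up to whole bytes, and truncate the to_le_bytes result accordingly.

```rust
/// Encode an i64 in the fewest little-endian bytes from which
/// sign extension recovers the original value.
fn to_min_le_bytes(value: i64) -> Vec<u8> {
    // Significant bits: the magnitude bits plus one sign bit.
    let bits = if value < 0 {
        64 - value.leading_ones() + 1
    } else {
        64 - value.leading_zeros() + 1
    };
    let n = ((bits + 7) / 8) as usize; // round up to whole bytes
    value.to_le_bytes()[..n].to_vec()
}

fn main() {
    assert_eq!(to_min_le_bytes(0), [0x00]);
    assert_eq!(to_min_le_bytes(127), [0x7F]);       // fits in one signed byte
    assert_eq!(to_min_le_bytes(128), [0x80, 0x00]); // needs a second byte for the sign
    assert_eq!(to_min_le_bytes(-1), [0xFF]);
    assert_eq!(to_min_le_bytes(-129), [0x7F, 0xFF]);
}
```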
Last edited by Goverp on Mon Feb 09, 2026 9:29 am, edited 1 time in total.
Greybeard
Anon-E-moose
Watchman
Posts: 6566
Joined: Fri May 23, 2008 7:31 pm
Location: Dallas area


Post by Anon-E-moose » Sun Feb 08, 2026 4:48 pm

Code:

Prompt:
 Rust code to convert an i64 (8-byte signed integer) to a little-endian representation                                 
                                                                                                                       
Response:
 Here's a simple Rust function to convert an i64 to its little-endian byte representation:                             
                                                                                                                       
 ```rust                                                                                                               
   fn to_le_bytes(value: i64) -> [u8; 8] {                                                                             
       value.to_le_bytes()                                                                                             
   }                                                                                                                   
 ```                                                                                                                   
                                                                                                                       
 The i64 type in Rust has a built-in to_le_bytes() method that returns [u8; 8] - an array of 8 bytes in little-endian  
 order.               
I suppose one could ask it to write something similar to value.to_le_bytes.

Edit to add:

Code:

Prompt:
 code something similar to value.to_le_bytes()                                                                         
                                                                                                                       
Response:
 Here's a manual implementation of to_le_bytes() for i64:                                                              
                                                                                                                       
 ```rust                                                                                                               
   fn to_le_bytes(value: i64) -> [u8; 8] {                                                                             
       let mut bytes = [0u8; 8];                                                                                       
       let mut v = value as u64;                                                                                       
                                                                                                                       
       for i in 0..8 {                                                                                                 
           bytes[i] = (v & 0xFF) as u8;                                                                                
           v >>= 8;                                                                                                    
       }                                                                                                               
                                                                                                                       
       bytes                                                                                                           
   }                                                                                                                   
 ```                                                                                                                   
                                                                                                                       
 How it works:                                                                                                         
 1. Creates an 8-byte array initialized to zeros                                                                       
 2. Converts the i64 to u64 (same bit pattern, just unsigned)                                                          
 3. Iterates through each byte position:                                                                               
     - Extracts the lowest 8 bits using v & 0xFF                                                                       
     - Stores it in the array                                                                                          
     - Shifts right by 8 bits to process the next byte                                                                 
 4. Returns the byte array                                                                                             
                                                                                                                       
 Since we fill from index 0 upward and extract the lowest bits first, the least significant byte ends up at index 0 -  
 which is the little-endian format.
It also gave an alternative version using bitwise operations.
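For anyone who wants to verify it, here's a quick sketch (my addition, not part of the model's output) comparing the hand-rolled function against the standard library's own i64::to_le_bytes:

```rust
// Sanity check (added for illustration): the hand-rolled function should
// agree with the standard library's i64::to_le_bytes on every input.
fn to_le_bytes(value: i64) -> [u8; 8] {
    let mut bytes = [0u8; 8];
    let mut v = value as u64;
    for i in 0..8 {
        bytes[i] = (v & 0xFF) as u8;
        v >>= 8;
    }
    bytes
}

fn main() {
    for &x in &[0i64, 1, -1, 0x0123_4567_89AB_CDEF, i64::MIN, i64::MAX] {
        assert_eq!(to_le_bytes(x), x.to_le_bytes());
    }
    println!("manual and std to_le_bytes agree");
}
```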

It didn't take long, a couple of minutes, but the LLM is running on a Strix Halo.
Last edited by Anon-E-moose on Sun Feb 08, 2026 4:57 pm, edited 2 times in total.
UM780 xtx, 6.18 zen kernel, gcc 15, openrc, wayland
minixforum m1-s1 max -- same software as above but used for ai learning


Zealots are gonna be zealots, just like haters are gonna be haters
Top
pingtoo
Advocate
Advocate
User avatar
Posts: 2180
Joined: Fri Sep 10, 2021 8:37 pm
Location: Richmond Hill, Canada

Re: Another fail

  • Quote

Post by pingtoo » Sun Feb 08, 2026 4:49 pm

Goverp wrote:OK, this is a necro-post. I'm reporting my latest attempt to get AI to solve a coding problem. To wit, Rust code to convert an i64 (8-byte signed integer) to a little-endian representation using the fewest bytes. I'd asked a related question before, and the AI translated it into "convert to i8, i16 or i32", which is not the same thing. This time I made it explicit that I could get a 3-byte or 7-byte result.

The AI started out moderately well, but think mode showed it getting bogged down trying to handle numbers around modulus 128. A reasonable start, but from there on it looped, trying to achieve something (not sure what). After about 10 minutes it said "This is getting too complex", which was right, it was over-thinking. Sadly it returned to the same approach... I put it out of its misery.

Meanwhile I note an item on my Google suggestions about someone asking an AI to "write a quick-sort", and being pleased with the results. Yeah, I think Wikipedia could do that without AI.

So, my current conclusion is that AI can help with code where I don't need help, but is useless when it requires creativity. It seems to be just a better search engine. That's a surprise, given the think-mode output looks like it's trying to be creative, it just doesn't get anywhere on my cases. Maybe it's the way I tell them >-)

(My chosen solution to the i64 conversion problem will use Rust's i64 bit-counting methods "leading_ones" or "leading_zeros". I might try telling the AI that part of the answer to see if it can work out a tidy algorithm.)
This is an interesting experiment. I wonder if your test proves that having an AI model locally may not be "good" for learning something.

I honestly didn't even understand the question you posted here, so I became curious. I took your question
Rust code to convert an i64 (8-byte signed integer) to a little-endian representation using the fewest bytes
and posted it to three AI chatbots, Gemini, Kimi and Qwen, just to see how they would react (all three are in some kind of "thinking" mode); all three gave answers in about one or two minutes.

Gemini's explanation felt very technical; it said "This process is often called 'minimal encoding'", which did not give me an understanding of what it is. Kimi's explanation, "This is commonly known as 'variable-length encoding' or 'LEB128'", did make the question click for me. Qwen did not explain much but simply stated how it understood the question and provided an implementation.

Both Kimi and Qwen implemented the decoding function as well, whereas Gemini only gave the encoding function. All three produced test/verify logic to exercise their code.

Gemini and Kimi both suggested there are more efficient implementations when there is a known "value range", but only Kimi actually presented the known-value-range implementation.
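For the curious, here is a minimal sketch of the signed LEB128 scheme Kimi named (my own illustration, not any of the bots' actual code): each output byte carries 7 payload bits, and the high bit flags whether more bytes follow.

```rust
// Illustrative signed LEB128 encoder (sketch, not a bot's output).
// Each byte holds 7 value bits; bit 7 set means "more bytes follow".
fn sleb128_encode(mut v: i64) -> Vec<u8> {
    let mut out = Vec::new();
    loop {
        let byte = (v & 0x7F) as u8;
        v >>= 7; // arithmetic shift preserves the sign
        // Done when the remaining value is pure sign extension and the
        // sign bit (bit 6) of the emitted byte already matches it.
        let done = (v == 0 && byte & 0x40 == 0) || (v == -1 && byte & 0x40 != 0);
        out.push(if done { byte } else { byte | 0x80 });
        if done {
            break;
        }
    }
    out
}

fn main() {
    assert_eq!(sleb128_encode(0), vec![0x00]);
    assert_eq!(sleb128_encode(-1), vec![0x7F]);
    assert_eq!(sleb128_encode(-128), vec![0x80, 0x7F]);
    assert_eq!(sleb128_encode(64), vec![0xC0, 0x00]);
    println!("SLEB128 sketch ok");
}
```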

Then I took your follow-up about Rust's i64 "leading_ones" or "leading_zeros" methods and asked
please compare your implementation with "leading_ones" or "leading_zeros" methods
All three could generate the alternative implementation, but only Qwen pointed out that using "leading_zeros" may produce an incorrect encoding without sign-bit boundary checks.

For the alternative implementation (leading_ones/leading_zeros), all three suggested it is the optimal one; however, Gemini noted that the initial "byte-loop" implementation has the advantage of being easy to teach and easy to maintain.

So basically what I learned is that current AI chatbots can be very good tools for learning concepts, and it is beneficial to check with multiple bots to better understand the ideas. I am not a Rust programmer, so I cannot tell whether the bots produced correct code, but from a quick glance it seems to be OK.

-- Edit --
Just to echo Anon-E-moose's post: both Gemini and Qwen used to_le_bytes(), whereas Kimi seems to have implemented the bit counting itself for its initial implementation.
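To illustrate the leading_ones/leading_zeros route the thread keeps circling, here is a sketch of my own (not Goverp's actual solution, nor any bot's output): fold the sign away so one leading_zeros call counts the redundant sign bits for both positive and negative values, reserve one extra bit for the sign itself, and round up to whole bytes. A matching decoder sign-extends from the top byte.

```rust
/// Sketch: encode an i64 in the fewest little-endian bytes that still
/// round-trip under sign extension (illustration only).
fn min_le_bytes(v: i64) -> Vec<u8> {
    // For negatives v ^ (v >> 63) is !v, for non-negatives it is v,
    // so leading_zeros counts the redundant sign bits in both cases.
    let significant = 64 - (v ^ (v >> 63)).leading_zeros();
    // One extra bit for the sign itself; round up to whole bytes.
    let n = ((significant + 8) / 8) as usize;
    v.to_le_bytes()[..n].to_vec()
}

/// Decode by sign-extending from the most significant stored byte.
fn decode_le(bytes: &[u8]) -> i64 {
    let negative = (*bytes.last().unwrap() as i8) < 0;
    let mut buf = if negative { [0xFFu8; 8] } else { [0u8; 8] };
    buf[..bytes.len()].copy_from_slice(bytes);
    i64::from_le_bytes(buf)
}

fn main() {
    for &x in &[0i64, 1, -1, 127, 128, -128, -129, i64::MIN, i64::MAX] {
        assert_eq!(decode_le(&min_le_bytes(x)), x);
    }
    // The sign-bit boundary Qwen flagged: -128 fits in one byte, 128 needs two.
    assert_eq!(min_le_bytes(-128), vec![0x80]);
    assert_eq!(min_le_bytes(128).len(), 2);
    println!("minimal encoding round-trips ok");
}
```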
Top
Goverp
Advocate
Advocate
User avatar
Posts: 2402
Joined: Wed Mar 07, 2007 6:41 pm

  • Quote

Post by Goverp » Mon Feb 09, 2026 9:43 am

As edited above, I maligned the Rust code produced for my problem - it does work, but it is longer than necessary, using a loop. But the AI (the same Qwen I used before) ignored me when I suggested using the leading_ones methods. Watching the machine think was interesting: it spent some time checking its answer, which I thought impressive. However, it must have fallen off the edge of its token context, as not only did it ignore my instruction to use the leading ones/zeros method, but it also confabulated that I'd asked for standard library functions only (though in fact I would have...).

It was also slow - took perhaps 30 minutes to arrive at the code, but I've spent rather longer than that researching and thinking up various strategies! I now think that (a) it did an acceptable job (its algorithm was good) but (b) it should have done better, and considered other strategies - basically it was obsessed by bytes rather than bits, and there are useful bit-twiddling ways to simplify and speed the solution.

This AI also had a clear bug - when it reached its final solution it got stuck in a loop stating the solution, as before on my earlier problem. IIUC ollama automatically updates models, so I guess it's been superseded by one of the other Qwen variants. I may try again.
Greybeard
Top
Zucca
Administrator
Administrator
User avatar
Posts: 4692
Joined: Thu Jun 14, 2007 10:31 pm
Location: Rasi, Finland
Contact:
Contact Zucca
Website

  • Quote

Post by Zucca » Mon Feb 09, 2026 9:54 am

Split some posts covering general AI talk off to here.
..: Zucca :..

Code: Select all

init=/sbin/openrc-init
-systemd -logind -elogind seatd
I am NaN! I am a man!
Top