You can get to the point where you can write hundreds of lines of code that are almost always bug free. Certainly, as you mature as a programmer, you can write more and more lines of code with fewer and fewer bugs in them.
But totally bug free?
An object lesson here is this widely used 17-line subroutine, which contained a subtle bug that remained undetected for 20 years.
Is your coding so good that you wouldn't make a coding error like that yourself? Or do you know of anyone who can code so accurately as that?
Also, as you get better at coding, yes, you can usually write maybe a hundred lines of code with no bugs. But if you are able to do that, it means you can write code far faster - because it's the bugs that slow the process down most of all. So you end up regularly writing thousands of lines of code, and those then have bugs in them. If you can write a thousand lines of code bug free, then you end up writing tens of thousands of lines of code - with bugs in them.
Can anyone write ten thousand lines of code that are virtually bug free, with no significant bugs in them?
(I remember reading somewhere - I think about ten years ago - at any rate quite a long time ago - about a programmer who did this, who wrote thousands of lines of intricate code and never had to fix any bugs in his code - remarkable - but can't seem to find it now. Anyone know who it is and what the background of the story is? I might have mis-remembered, will see if I can find it.)
Then, for your program to be completely bug free, it's not just your code: the compiler that compiles it to binary, and the operating system that runs it, have to be bug free as well.
In the case of the operating systems most people use, such as Windows, Mac, and Linux, and the compilers most programmers use on them (such as GCC, or Microsoft's Visual C++), these are widely known to be buggy, with frequent bug fixes.
Here, for instance, are some of the recent (last year's) bug fixes for GCC, the compiler used to build Linux and Mac programs as binaries from source code.
Now, in practice it is rare indeed that you'll come across a compiler bug. But the fact that they happen at all means you can't really be sure your code is totally bug free, even if it works fine in all your tests.
After a new operating system "settles in" over a few years, and if the developers respond to bug reports and fix them, then it can become reasonably bug free, simply because it's had thousands of testers before release and millions of people using it afterwards.
However, in critical situations, you can write bug free code. It's just a lot more work.
To do that you need to use formal verification of programs.
Of course you could have bugs in the verification procedure too, but the problem of coding bugs isn't really at the mathematical level of ideas. Mathematicians are able to write essentially error-free mathematical proofs that run to many pages and thousands of lines.
You do get errors in mathematical proofs, but they are far, far rarer than bugs in programming code, and they usually get discovered quickly - often by the mathematician who wrote the proof in the first place. If enough independent mathematicians look over a proof, then you can be pretty confident - unless it's hugely complicated - that it is correct and "bug free".
I'm not sure what it is about programming that makes it so much harder to do this. But that's how formal verification works: by pushing the problem up to a more mathematical level, where humans find it far easier to see that it is bug free.
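As a toy illustration of what "pushing the problem to a mathematical level" looks like in practice, here is a sketch in the Lean 4 proof assistant (the function and theorem names are my own, chosen for illustration). The proof checker refuses to accept the file unless the stated property genuinely holds for every possible input - a guarantee that no amount of testing can give:

```lean
-- A function together with a machine-checked proof about it.
-- If the claimed property were false, Lean would reject this
-- file at compile time.
def double (n : Nat) : Nat := n + n

theorem double_eq_two_mul (n : Nat) : double n = 2 * n := by
  unfold double
  omega   -- decision procedure for linear arithmetic closes the goal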
There are some formally verified operating system kernels, such as seL4, and there is a formally verified C compiler, CompCert, which can handle most of the C programming language.
I wonder if some day we will find a way to make programming as easy to do - and yet as reliable - as mathematical proof?
Perhaps some kind of "artificial intelligence" - which of course would need to be formally verified itself - that takes your ideas and automatically puts them into bug free code?
So that instead of specifying everything that the program has to do line by line - you can use a broader brush approach?
If mathematicians had to write out all their proofs in first-order logic within a formal system such as ZF set theory, they would surely have many errors in their proofs too. It's because we can look at those proofs at a higher level that we can be confident they are valid.
So we need some way to do the same for programming: to be able to write out a program at a far higher level of abstraction, like a mathematical proof.
We may get the ability to do that in the future, but we are far away from it at present.
So we are faced with a choice. Either we write buggy code, with buggy compilers, on buggy operating systems, and rely on thorough testing to find as many bugs as we can - that's the situation for almost all programmers today.
Or, in mission-critical situations, we write formally verified code, which takes far longer, using formally verified compilers and formally verified operating systems. That is such hard work that at present it is used only in limited situations - for instance, where lives depend on the code working.