How does a programming language work?

We’ll have to do a little history. Sorry for the long answer.

In the beginning, there were the first computers

Monsters weighing several tens of tons and occupying several rooms. Programming these machines meant taking them apart and putting them back together again by changing the internal wiring.

The use of vacuum tubes

Vacuum tubes (the “lamps”) made it possible to limit this rewiring (rewiring was still necessary, but no longer of the whole machine), and the appearance, then the miniaturization, of transistors in turn reduced the size of the machines.

As the machines evolved, the need to rewire them disappeared entirely.

But computers still had to be driven in binary. To put it simply, the central computing unit (what we call the processor today) has a set of predefined instructions. Each instruction has a value expressed as a binary number. Some instructions are followed by data, which are also represented as binary numbers. The program, in its rawest form, is the succession of these instructions.
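
To make that concrete, here is a minimal sketch of a hypothetical toy machine (not any real processor): a handful of numbered instructions, some followed by data, and a loop that executes them one after the other.

```python
# A minimal sketch of a hypothetical toy machine, not a real processor.
# Each instruction is a number; some instructions are followed by data.
LOAD, ADD, PRINT, HALT = 0b0001, 0b0010, 0b0011, 0b0000

# The program, in its rawest form: a succession of binary values.
program = [LOAD, 0b0101,   # put 5 into the accumulator
           ADD,  0b0011,   # add 3 to it
           PRINT,          # output the accumulator
           HALT]           # stop

def run(memory):
    acc, pc = 0, 0                       # accumulator and program counter
    while True:
        op = memory[pc]
        if op == LOAD:
            acc = memory[pc + 1]; pc += 2
        elif op == ADD:
            acc += memory[pc + 1]; pc += 2
        elif op == PRINT:
            print(acc); pc += 1
        elif op == HALT:
            return

run(program)   # prints 8
```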

We can program using Assembler

To put it simply, the assembler is a human-readable representation of the processor's instruction set. Each family of processors has its own assembler. In a nutshell: each instruction corresponds to a short code word (a mnemonic). Assemblers are therefore proto-programming languages. Assembly is a bit more readable than binary code, but not very practical to use: the level of abstraction is at its lowest, which does not make algorithm design any easier.
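
As an illustration, here is a minimal sketch of what an assembler does for the hypothetical toy machine above: it simply replaces each code word with the corresponding instruction value.

```python
# A minimal sketch of an "assembler" for the hypothetical toy machine above:
# each mnemonic maps to one opcode; operands are passed through as numbers.
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "PRINT": 0b0011, "HALT": 0b0000}

def assemble(source):
    binary = []
    for line in source.strip().splitlines():
        mnemonic, *operands = line.split()
        binary.append(OPCODES[mnemonic])          # translate the mnemonic
        binary.extend(int(x) for x in operands)   # copy operands as-is
    return binary

# Human-readable source on the left, raw binary program as the result.
print(assemble("""
LOAD 5
ADD 3
PRINT
HALT
"""))   # -> [1, 5, 2, 3, 3, 0]
```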

The computer scientists of the time

One of them in particular, Grace Hopper (she was not the only one, but she was a pioneer in this field), had the idea of writing programs in a form of simplified English. This simplified English would have a vocabulary, syntax and grammar that a program could recognize, and which could be translated into assembly (and thus into binary code). It would therefore be called a programming language.

When we use a programming language, we write in a form that the programmer can easily understand. This is called the “source code”. In this form, the program is incomprehensible to the processor and cannot be executed as is. But because the programming language can be understood programmatically, the source code can be read by another program, the compiler (I will not dwell on the subtleties between compiled and interpreted languages). The compiler provides the translation from the source code to the binary code that is executed.
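
To illustrate that translation step, here is a deliberately tiny, hypothetical "compiler" sketch for the same toy instruction set as above: it only understands one kind of statement, but it shows the idea of turning readable source code into a binary program.

```python
# A minimal sketch of the "source code -> binary" step, reusing the
# hypothetical toy instruction set from above. Real compilers are far
# more involved (parsing, optimization, code generation for a real CPU).
LOAD, ADD, PRINT, HALT = 0b0001, 0b0010, 0b0011, 0b0000

def compile_print_sum(source):
    # Only understands one statement form: "print <number> + <number>"
    _, expr = source.split("print")
    left, right = (int(part) for part in expr.split("+"))
    return [LOAD, left, ADD, right, PRINT, HALT]

binary = compile_print_sum("print 5 + 3")
print(binary)   # -> [1, 5, 2, 3, 3, 0], ready for the toy machine to run
```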

To summarize roughly: a programming language is a simplified language for expressing algorithms, with more or less abstraction from the physical realities of the machine on which the program will run. The grammar of this language is recognizable by a program, so that code written in it can be read by another program, the compiler. The compiler translates the program written in the programming language into binary code, which can then be executed.

A programming language is just a language like any other, except that this time it is used to talk to a machine.

Programming languages all have the same goal: to make a machine with a binary processor do something.

PHP, Java, C++, etc. The syntaxes change, the paradigms sometimes too, but in the end, everything is computed with 0s and 1s.

When we write code, we write “instructions” that the machine will execute.

All current machines (in 99.99999% of cases; I'm not counting quantum computers) have a processor (also called a microprocessor) that contains billions of transistors.

A transistor is an electronic component; combined into logic gates, transistors perform the basic binary operations (AND, OR, NOR, etc.).
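
As a rough software sketch (in hardware these gates are built from transistors), here is how a couple of those basic operations combine to add two bits, the kind of building block everything else rests on.

```python
# A rough software sketch of logic gates. Combining them gives arithmetic:
# a half adder adds two bits using one XOR gate and one AND gate.
def AND(a, b): return a & b
def OR(a, b):  return a | b
def XOR(a, b): return a ^ b

def half_adder(a, b):
    return XOR(a, b), AND(a, b)   # (sum bit, carry bit)

for a in (0, 1):
    for b in (0, 1):
        s, c = half_adder(a, b)
        print(f"{a} + {b} = carry {c}, sum {s}")
# 1 + 1 = carry 1, sum 0  -> binary 10, i.e. 2
```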

In the end, everything comes together, even if there is a great diversity of programming languages.

A programming language can be both very simple and very complicated. You can actually create your own!

I saw other interesting answers; I'll leave the in-depth explanations to them.