What are the Greenest or Least Environmentally Friendly Programming Languages?
Technology has revolutionized the way we live, work, and play, and it has also had a profound impact on the world of programming languages. In recent years, there has been a growing trend towards green, energy-efficient languages. C, C++, and Rust are the most popular languages in this category: they compile to efficient native code and consume considerably less energy than languages like Java and JavaScript, which in turn means lower emissions from the machines that run them. So if you're looking for a language that's good for the environment, these are definitely worth considering.
The study below runs 10 benchmark problems in 27 languages [1]. It measures the runtime, memory usage, and energy consumption of each language. The abstract of the paper is quoted below.
“This paper presents a study of the runtime, memory usage and energy consumption of twenty seven well-known software languages. We monitor the performance of such languages using ten different programming problems, expressed in each of the languages. Our results show interesting findings, such as, slower/faster languages consuming less/more energy, and how memory usage influences energy consumption. We show how to use our results to provide software engineers support to decide which language to use when energy efficiency is a concern”. [2]
According to the paper, the authors monitored the performance of these languages on a set of programming problems, using implementations collected by the "Computer Language Benchmarks Game" project, which is dedicated to implementing the same algorithms in many different languages.
The team used Intel’s Running Average Power Limit (RAPL) tool to measure power consumption, which can provide very accurate power consumption estimates.
As expected, the research shows that several factors influence energy consumption. Execution speed is usually decisive, but the fastest language is not always the one that consumes the least energy: other factors besides speed, such as memory usage, also enter into the power consumption equation.
Energy
From the paper's energy table, it is worth noting that C, C++, and Java are among the languages that consume the least energy. On the other hand, JavaScript consumes almost twice as much as Java and four times what C consumes. As an interpreted language, Python needs more time to execute and is therefore one of the least "green" languages, sitting among those that consume the most energy.
The runtime results mirror the energy results: the faster a programming language runs, the less energy it expends.
Memory
In terms of memory consumption, we see how Java has become one of the most memory-consuming languages along with JavaScript.
To conclude:
Most Environmentally Friendly Languages: C, Rust, and C++
Least Environmentally Friendly Languages: Ruby, Python, Perl
Although this study may seem a curiosity without much practical application, it may help design better and more efficient programming languages. It also gives us a new parameter to weigh when choosing a programming language.
This parameter can no longer be ignored, now or in the near future; besides, the fastest languages are generally also the most environmentally friendly.
If you’re interested in something that is both green and energy efficient, you might want to consider the Groeningen Programming Language (GPL). Developed by a team of researchers at the University of Groningen in the Netherlands, GPL is a relatively new language that is based on the C and C++ programming languages. Python and Rust are also used in its development. GPL is designed to be used for developing energy efficient applications. Its syntax is similar to other popular programming languages, so it should be relatively easy for experienced programmers to learn. And since it’s open source, you can download and use it for free. So why not give GPL a try? It just might be the perfect language for your next project.
Top 10 Caveats – Counter arguments:
#1 C++ will perform better than Python on some simple algorithmic problems. C++ is a fairly bare-bones language with a medium level of abstraction, while Python is a high-level language that relies on many external components, some of which have actually been written in C++. And of course C++ will be more efficient than C# at solving some basic problems. But let's see what happens if you build a complete web application back-end in C++.
#2: This isn’t much useful. I can imagine that the fastest (performance-wise) programming languages are greenest, and vice versa. However, running time is not only the factor here. An engineer may spend 5 minutes writing a Python script that does the job pretty well, and spends hours on debugging C++ code that does the same thing. And the performance difference on the final code may not differ much!
#3: Has anyone actually taken a look at the winning C and Rust solutions? Most of them are hand-written assembly code masked as SSE intrinsics. That is the kind of code that only a handful of people are able to maintain, not to mention come up with. On the other hand, the Python solutions are pure Python code without a trace of the accelerated (read: written in Fortran, C, C++, and/or Rust) libraries like NumPy used in all sane Python projects.
#4: I used C++ years ago and now use Python. To save energy, I turn off my laptop when I get off work, I don't use extra monitors, my AC is always set to 28 degrees Celsius, I plan to change my car to an electric one, and I use Python.
#5: I disagree. We should consider the energy saved by the products created in those languages. For example, a C#-based Microsoft Teams allows people to work remotely. How much CO2 do we save that way? 😉
#6 Also, some Python programs, such as anything using NumPy, spend a considerable fraction of their cycles outside the Python interpreter, in a C or C++ library.
I would love to see a scatterplot of execution time vs. energy usage as well. Given that modern CPUs can turbo and then go to a low-power state, a modest increase of energy usage during execution can pay dividends in letting the processor go to sleep quicker.
An application that vectorizes heavily may end up having very high peak power and moderately higher energy usage that's repaid by going to sleep much sooner. In the cell phone application processor business, we called that "race to sleep." By Joe Zbiciak
#7 By Tim Mensch: It's almost complete garbage.
If you look at the TypeScript numbers, they are more than 5x worse than JavaScript.
This has to mean they were running the TypeScript compiler every time they ran their benchmark. That’s not how TypeScript works. TypeScript should be identical to JavaScript. It is JavaScript once it’s running, after all.
Given that glaring mistake, the rest of their numbers are suspect.
I suspect Python and Ruby really are pretty bad given better written benchmarks I’ve seen, but given their testing issues, not as bad as they imply. Python at least has a “compile” phase as well, so if they were running a benchmark repeatedly, they were measuring the startup energy usage along with the actual energy usage, which may have swamped the benchmark itself.
PHP similarly has a compile step, but PHP may actually run that compile step every time a script is run. So of all of the benchmarks, it might be the closest.
I do wonder if they also compiled the C and C++ code as part of the benchmarks as well. C++ should be as optimized or more so than C, and as such should use the same or less power, unless you're counting the compile phase. And if they're also measuring the compile phase, then they are being intentionally deceptive. Or stupid. But I'll go with deceptive to be polite. (You usually compile a program in C or C++ once and then you can run it millions or billions of times—or more. The energy cost of compiling is minuscule compared to the runtime cost of almost any program.)
I’ve read that 80% of all studies are garbage. This is one of those garbage studies.
#8 This is nonsense, as it runs low-level benchmarks that implement basic algorithms in high-level languages. You don't do that for anything more than theoretical work.
Do a comparison of real-world tasks and you should find less of a spread.
Do a comparison of web-server work or something like that – I guess you may find a factor of maybe 5 or 10 – if it’s done right.
Don’t do low-level algorithms in a high-level language for anything more than teaching. If you need such an algorithm – the way to do it is to implement it in a library as a native module. And then it’s compiled to machine code and runs as fast as any other implementation.
#9 By Tim Mensch
It's worse than nonsense. TypeScript compiles directly to JavaScript, but gets a crazy worse rating somehow?!
For NumPy and machine learning applications, most of the calculations are going to be in C.
The world I’ve found myself in is server code, though. Servers that run 24/7/365.
And in that case, a server written in C or C++ will be able to saturate its network interface at a much lower continuous CPU load than a Python or Ruby server can. So in that respect, the latter languages’ performance issues really do make a difference in ongoing energy usage.
But as you point out, in mobile there could be an even greater difference due to the CPU being put to sleep or into a low power mode if it finishes its work more quickly.
Programming, Coding and Algorithms Questions and Answers.
Coding is a complex process that requires precision and attention to detail. While there are many resources available to help learn programming, it is important to avoid making some common mistakes. One mistake is assuming that programming is easy and does not require any prior knowledge or experience. This can lead to frustration and discouragement when coding errors occur. Another mistake is trying to learn too much at once. Coding is a vast field with many different languages and concepts. It is important to focus on one area at a time and slowly build up skills. Finally, another mistake is not practicing regularly. Coding is like any other skill- it takes practice and repetition to improve. By avoiding these mistakes, students will be well on their way to becoming proficient programmers.
In addition to avoiding these mistakes, there are certain things that every programmer should do in order to be successful. One of the most important things is to read coding books. Coding books provide a comprehensive overview of different languages and concepts, and they can be an invaluable resource when starting out. Another important thing for programmers to do is never stop learning. Coding is an ever-changing field, and it is important to keep up with new trends and technologies.
Coding is a process of transforming computer instructions into a form a computer can understand. Programs are written in a particular language which provides a structure for the programmer and uses specific instructions to control the sequence of operations that the computer carries out. The programming code is written in and read from a text editor, which in turn is used to produce a software program, application, script, or system.
When you’re starting to learn programming, it’s important to have the right tools and resources at your disposal. Coding can be difficult, but with the proper guidance it can also be rewarding.
This blog is an aggregate of clever questions and answers about Programming, Coding, and Algorithms. This is a safe place for programmers who are interested in optimizing their code, learning to code for the first time, or just want to be surrounded by the coding environment.
I think the most common mistakes I witnessed, or made myself, when learning are:
1: Trying to memorize every language construct. Do not rely on your memory; use Stack Overflow.
2: Spending a lot of time solving an issue yourself before you google it. Just about every issue you can stumble upon has, in 99.99% of cases, already been solved by someone else. Learn to properly search for solutions first.
3: Spending a couple of days on a task and realizing it was not worth it. If the time you spend on a single problem is more than half an hour, then you are probably doing it wrong; search for alternatives.
4: Writing code from scratch. Do not reinvent the wheel: if you need to write a blog, just search for a demo application in the language and framework you chose, and build your logic on top of it. Need some other feature? Search for another demo incorporating this feature, and use its code.
In programming you need to be smart and prioritize your time wisely. Diving down deep rabbit holes will not earn you good money.
Congratulations, you have implicitly defined an interface and a function that requires its parameter to fulfil that interface (implicitly).
How do you know any of this? Oh, no problem, just try using the function, and if it fails during runtime with complaints about your bar missing a foo method, you will know what you did wrong. By Paulina Jonušaitė
List of Freely available programming books – What is the single most influential book every Programmers should read
What is the best and easy programming language to learn in 2022?
Best != easy and easy != best. Interpreted BASIC is easy, but not great for programming anything more complex than tic-tac-toe. C++, C#, and Java are very widely used, but none of them are what I would call easy.
Is Python an exception? It’s a fine scripting language if performance isn’t too critical. It’s a fine wrapper language for libraries coded in something performant like C++. Python’s basics are pretty easy, but it is not easy to write large or performant programs in Python.
Like most things, there is no shortcut to mastery. You have to accept that if you want to do anything interesting in programming, you’re going to have to master a serious, not-easy programming language. Maybe two or three. Source.
Why do modern compilers even require us to declare data types? Can’t it figure out what we are doing and put that stuff in for us? Like how JavaScript does.
Type declarations mainly aren’t for the compiler — indeed, types can be inferred and/or dynamic so you don’t have to specify them.
They’re there for you. They help make code readable. They’re a form of active, compiler-verified documentation.
For example, look at this method/function/procedure declaration:
locate(tr, s) { … }
What type is tr?
What type is s?
What type, if any, does it return?
Does it always accept and return the same types, or can they change depending on values of tr, s, or system state?
If you’re working on a small project — which most JavaScript projects are — that’s not a problem. You can look at the code and figure it out, or establish some discipline to maintain documentation.
If you’re working on a big project, with dozens of subprojects and developers and hundreds of thousands of lines of code, it’s a big problem. Documentation discipline will get forgotten, missed, inconsistent or ignored, and before long the code will be unreadable and simple changes will take enormous, frustrating effort.
But if the compiler obligates some or all type declarations, then you say this:
Node locate(NodeTree tr, CustomerName s) { … }
Now you know immediately what type it returns and the types of the parameters, you know they can’t change (except perhaps to substitutable subtypes); you can’t forget, miss, ignore or be inconsistent with them; and the compiler will guarantee you’ve got the right types.
That makes programming — particularly in big projects — much easier. Source: Dave Voorhis
What is a programming language that you hope never to work in again, and why?
COBOL. Verbose like no other, excess structure, unproductive, obtuse, limited, rigid.
JavaScript. Insane semantics, weak typing, silent failure. Thankfully, one can use transpilers for more rationally designed languages to target it (TypeScript, ReScript, js_of_ocaml, PureScript, Elm.)
ActionScript. Macromedia Flash's take on ECMA 262 (i.e., ~JavaScript) back in the day. Its static typing was gradual, so the compiler wasn't big on type error-catching. This one's thankfully deader than Disco.
BASIC. Mandatory line numbering. Zero standardization. Not even a structured language — you’ve never seen that much spaghetti code.
In the realm of dynamically typed languages, anything that is not in the Lisp family. To me, Lisps are just more elegant and richer-featured than the rest. Alexander Feterman
Why does game programming fit so well with Object Oriented Programming paradigm?
Object-oriented programming is “a programming model that organizes software design around data, or objects, rather than functions and logic.”
Most games are made of “objects” like enemies, weapons, power-ups etc. Most games map very well to this paradigm. All the objects are in charge of maintaining their own state, stats and other data. This makes it incredibly easier for a programmer to develop and extend video games based on this paradigm.
I could go on, but I’d need an easel and charts. Chrish Nash
What are the concepts every Java programmer must know?
Ok… I think this is one of the most important questions to answer. Based on my personal experience as a programmer, I would say you must learn the following five universal core concepts of programming to become a successful Java programmer.
(1) Mastering the fundamentals of the Java programming language – This is the most important skill you must learn to become a successful Java programmer. You must master the fundamentals of the language, especially areas like OOP, Collections, Generics, Concurrency, I/O, Strings, Exception handling, Inner Classes, and JVM architecture.
(2) Data Structures and Algorithms – Programming languages are basically just a tool to solve problems. Problems generally have data to process in order to make decisions, and we have to build a procedure to solve that specific problem domain. In real life, the complexity of the problem domain and the amount of data we have to handle can be very large. That's why it is essential to know basic data structures like Arrays, Linked Lists, Stacks, Queues, Trees, Heaps, Dictionaries, Hash Tables, and Graphs, as well as basic algorithms like Searching, Sorting, Hashing, Graph algorithms, Greedy algorithms, and Dynamic Programming.
(3) Design Patterns – Design patterns are general reusable solutions to commonly occurring problems within a given context in software design, and they are absolutely crucial for a hard-core Java programmer. If you don't use design patterns you will write much more code, it will be buggy and hard to understand and refactor, not to mention untestable; and they are a really great way of communicating your intent very quickly to other programmers.
(4) Programming Best Practices – Programming is not only about learning and writing code. Code readability is a universal subject in the world of computer programming. It helps standardize products and reduce future maintenance cost. Best practices help you, as a programmer, to think differently and improve your problem-solving attitude. A simple program can be written in many ways if given to multiple developers. Thus the need for best practices comes into the picture, and every programmer must be aware of them.
(5) Testing and Debugging (T&D) – Besides knowing how to write the code for a specific problem domain, you have to learn how to test that code and debug it when needed. Some programmers skip unit testing or other testing methodologies and leave them to the QA guys. That will lead to delivering code with 80% of its bugs still hiding in it to the QA team, reducing productivity and pushing your project toward failure. When a misbehavior or bug occurs in your code during the testing phase, it is essential to know the debugging techniques to identify the bug and its root cause.
I hope these instructions will help you become a successful Java programmer. Here I am explaining only the universal core concepts that you must learn as a successful programmer. I am not mentioning any technologies that a Java programmer must know, such as Spring, Hibernate, Micro-services, and build tools, because those can change according to the problem domain or environment you are currently working in….. Happy Coding!
You’ll also possibly never use them. Or use them very infrequently.
If you mention that on here, some will say you are a lesser developer. They will insist that the line between good and not good developers is algorithm knowledge.
That’s a shame, really.
In commercial work, you never start a day thinking ‘I will use algorithm X today’.
The work demands the solution. Not the other way around.
This is yet another proof that a lot of technical-sounding stuff is actually all about people. Their investment in something. Need for validation. Preference.
The more you know in development, the better. But I would not prioritize algorithms right at the top, based on my experience. Alan Mellor
What are the disadvantages of using C++ to make a programming language rather than C, and are there any at all?
So you’re inventing a new programming language and considering whether to write either a compiler or an interpreter for your new language in C or C++?
The only significant disadvantage of C++ is that in the hands of bad programmers, they can create significantly more chaos in C++ than they can in C.
But for experienced C++ programmers, the language is immensely more powerful than C and writing clear, understandable code in C++ can be a LOT easier.
INCIDENTALLY:
If you’re going to actually do this – then I strongly recommend looking at a pair of tools called “flex” and “bison” (which are OpenSourced versions of the more ancient “lex” and “yacc”). These tools are “compiler-compilers” that are given a high level description of the syntax of your language – and automatically generate C code (which you can access from C++ without problems) to do the painful part of generating a lexical analyzer and a syntax parser. Steve Baker
How do you make something private but accessible within a class in C++?
Did you know you can google this answer yourself? Search for “c++ private keyword” and follow the link to access specifiers, which goes into great detail and has lots of examples. In case google is down, here’s a brief explanation of access specifiers:
The private access specifier in a class or struct definition applies to the declarations that occur after it. A private declaration is visible only inside the class/struct, not in derived classes or structs, and not from outside.
The protected access specifier makes declarations visible in the current class/struct and also in derived classes and structs, but not visible from outside. protected is not used very often and some wise people consider it a code smell.
The public access specifier makes declarations visible everywhere.
You can also use access specifiers to control all the items in a base class. By Kurt Guntheroth
What are the shortcomings of the Rust Programming language?
Rust programmers do mention the obvious shortcomings of the language.
Such as that a lot of data structures can’t be written without unsafe due to pointer complications.
Or that they haven’t agreed what it means to call unsafe code (although this is somewhat of a solved problem, just like calling into assembler from C0 in the sysbook).
The main problem of the language is that it doesn’t absolve the programmers from doing good engineering.
It just catches a lot of the human errors that can happen despite such engineering. Jonas Oberhauser.
Will Rust beat C++ in performance and the speed of execution?
Comparing cross-language performance of real applications is tricky. We usually don’t have the resources for writing said applications twice. We usually don’t have the same expertise in multiple languages. Etc. So, instead, we resort to smaller benchmarks. Occasionally, we’re able to rewrite a smallish critical component in the other language to compare real-world performance, and that gives a pretty good insight. Compiler writers often also have good insights into the optimization challenges for the language they work on.
My best guess is that C++ will continue to have a small edge in optimizability over Rust in the long term. That’s because Rust aims at a level of memory safety that constrains some of its optimizations, whereas C++ is not bound to such considerations. So I expect that very carefully written C++ might be slightly faster than equivalent very carefully written Rust.
However, that’s perhaps not a useful observation. Tiny differences in performance often don’t matter: The overall programming model is of greater importance. Since both languages are pretty close in terms of achievable performance, it’s going to be interesting watching which is preferable for real-life engineering purposes: The safe-but-tightly-constrained model of Rust or the more-risky-but-flexible model of C++. By David VandeVoorde
Why do a lot of programmers shy away from learning lisp?
Lisp does not expose the underlying architecture of the processor, so it can’t replace my use of C and assembly.
Lisp does not have significant statistical or visualization capabilities, so it can’t replace my use of R.
Lisp was not built with unix filesystems in mind, so it’s not a great choice to replace my use of bash.
Lisp has nothing at all to do with mathematical typesetting, so won't be replacing LaTeX anytime soon.
And since I use vim, I don’t even have the excuse of learning lisp so as to modify emacs while it’s running.
In fewer words: for the tasks I get paid to do, lisp doesn’t perform better than the languages I currently use. By Barry RoundTree
What are some things that only someone who has been programming 20-50 years would know?
The truth of the matter gained through the multiple decades of (my) practice (at various companies) is ugly, not convenient and is not what you want to hear.
The technical job interview is a non-indicative and non-predictive waste of time; to put it bluntly, garbage (a Navy Seal can be as brave as (s)he wants to be during training, but only when the said Seal meets the bad guys face to face on the front line is her/his true mettle revealed).
An average project in an average company, both averaged the globe over, is staffed with mostly random, technically inadequate, people who should not be doing what they are doing.
Such random people have no proper training in mathematics and computer science.
As a result, all the code generated by these folks out there is a flimsy, low-quality, hugely inefficient, non-scalable, non-maintainable, hardly readable steaming pile of spaghetti mess – the absence of structure, order, discipline and understanding in one's mind is reflected at the keyboard 100 percent of the time.
It is a major hail mary, a hallelujah and a standing ovation to the genius of Alan Turing for being able to create a (Turing) Machine that, on the one hand, can take this infinite abuse and, on the other hand, being nothing short of a miracle, still produce binaries that just work. Or so they say.
There is one and only one definition of a computer programmer: that of a person who combines all of the following skills and abilities:
the ability to write a few lines of properly functioning (C) code in the matter of minutes
the ability to write a few hundred lines of properly functioning (C) code in the matter of a small number of hours
the ability to write a few thousand lines of properly functioning (C) code in the matter of a small number of weeks
the ability to write a small number of tens of thousands of lines of properly functioning (C) code in the matter of several months
the ability to write several hundred thousand lines of properly functioning (C) code in the matter of a small number of years
the ability to translate a given set of requirements into source code that is partitioned into a (large) collection of (small and sharp) libraries and executables that work well together and that can withstand a steady-state non stop usage for at least 50 years
It is this ability to sustain the above multi-year effort during which the intellectual cohesion of the output remains consistent and invariant is what separates the random amateurs, of which there is a majority, from the professionals, of which there is a minority in the industry.
There is one and only one definition of the above properly functioning code: that of a code that has a check mark in each and every cell of the following matrix:
the code is algorithmically correct
the code is easy to read, comprehend, follow and predict
the code is easy to debug
the intellectual effort to debug code, symbolized as E(d), is strictly larger than the intellectual effort to write code, symbolized as E(w). That is: E(d) > E(w). Thus, it is entirely possible to write a unit of code that even you, the author, can not debug
the code is easy to test
in different environments
the code is efficient
meaning that it scales well performance-wise when the size of the input grows without bound in both configuration and data
the code is easy to maintain
the addition of new and the removal or the modification of the existing features should not take five metric tons of blood, three years and a small army of people to implement and regression test
the certainty of and the confidence in the proper behavior of the system thus modified should be high
(read more about the technical aspects of code modification in the small body of my work titled “Practical Design Patterns in C” featured in my profile)
(my claim: writing proper code in general is an optimization exercise from the theory of graphs)
the code is easy to upgrade in production
lifting the Empire State Building in its entirety 10 feet in the thin blue air and sliding a bunch of two-by-fours underneath it temporarily, all the while keeping all of its electrical wires and the gas pipes intact, allowing the dwellers to go in and out of the building and operating its elevators, should all be possible
changing the engine and the tires on an 18-wheeler truck hauling down a highway at 80 miles per hour should be possible
A project staffed with nothing but technically capable people can still fail – the team cohesion and the psychological compatibility of team members is king. This is raw and unbridled physics – a team, or a whole, is more than the sum of its members, or parts.
All software project deadlines without exception are random and meaningless guesses that have no connection to reality.
Intelligence does not scale – a million fools chained to a million keyboards will never amount to one proverbial Einstein. Source
Is there a way to initialize an object without a constructor? Can you still create objects?
At a technical syntax level, this depends on the language. Many modern languages either create a default constructor, or will automatically initialize object fields to default values. There are other ways to initialize fields in some languages – maybe reflection, maybe a static method, maybe relaxed access control. Maybe (ugh, I feel sick) a whole bunch of setters.
But at the human level, why? Why engage in something unclear to the next programmer?
One nice thing about a constructor is that it tells me that you thought about how your object should be created. You considered what was needed to make it safe to use. By Alan Mellor
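To illustrate (a minimal Java sketch, with made-up fields): field initializers and a static factory method can do the work of an explicit constructor, although the compiler still generates a default constructor behind the scenes, exactly as the answer notes.

// Hypothetical example: no constructor is written, yet objects can still be created.
public class Settings {
    // Field initializers run when the compiler-generated default constructor executes.
    private int timeoutSeconds = 30;
    private String host = "localhost";

    // A static factory method is another common way to build configured instances.
    public static Settings withHost(String host) {
        Settings s = new Settings();   // uses the implicit default constructor
        s.host = host;
        return s;
    }

    @Override
    public String toString() {
        return host + " (timeout " + timeoutSeconds + "s)";
    }
}

Usage would simply be Settings s = Settings.withHost("example.org"); the object is fully initialized without the author ever typing a constructor.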
Is it bad if I write a function that only gets called once?
A function pulls a computation out of your program and puts it in a conceptual box labeled by the function’s name. This lets you use the function name in a computation instead of writing out the computation done by the function.
Writing a function is like defining an obscure word before you use it in prose. It puts the definition in one place and marks it out saying, “This is the definition of xxx”, and then you can use the one word in the text instead of writing out the definition.
Even if you only use a word once in prose, it’s a good idea to write out the definition if you think that makes the prose clearer.
Even if you only use a function once, it’s a good idea to write out the function definition if you think it will make the code clearer to use a function name instead of a big block of code. Source.
Can conditional statements be effectively removed by the use of polymorphism when using object-oriented programming?
Conditional statements of the form if this instance is type T then do X can generally — and usually should — be removed by appropriate use of polymorphism.
All conditional statements might conceivably be replaced in that fashion, but the added complexity would almost certainly negate its value. It’s best reserved for where the relevant types already exist.
Creating new types solely to avoid conditionals sometimes makes sense (e.g. maybe create distinct nullable vs not-nullable types to avoid if-null/if-not-null checks) but usually doesn’t. Source.
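A minimal Java sketch of that idea, with hypothetical types: once each shape knows how to compute its own area, the "if this instance is type T then do X" conditional disappears and dynamic dispatch does the work instead.

// Instead of: if (shape instanceof Circle) { ... } else if (shape instanceof Square) { ... }
interface Shape {
    double area();
}

class Circle implements Shape {
    private final double radius;
    Circle(double radius) { this.radius = radius; }
    @Override public double area() { return Math.PI * radius * radius; }
}

class Square implements Shape {
    private final double side;
    Square(double side) { this.side = side; }
    @Override public double area() { return side * side; }
}

class AreaPrinter {
    // The caller no longer branches on the concrete type.
    static void print(Shape s) {
        System.out.println(s.area());   // dynamic dispatch picks the right implementation
    }
}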
Can you explain exception handling in Java so clearly that I’ll never get it wrong ever again?
Something bad happens as your Java code runs.
Throw an exception.
The following lines after the throw do not run, saving them from the bad thing.
Control is handed back up the call stack until the Java runtime finds a catch() statement that matches the exception.
The code resumes running from there. Source: Allan Mellor
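A tiny sketch of that flow in Java (method and message names are made up for illustration): the throw skips the rest of the method, and execution resumes at the first matching catch found up the call stack.

public class ExceptionFlow {
    static int parsePositive(String text) {
        int value = Integer.parseInt(text);          // may throw NumberFormatException
        if (value < 0) {
            throw new IllegalArgumentException("negative: " + value);
        }
        return value;                                // skipped if anything above threw
    }

    public static void main(String[] args) {
        try {
            System.out.println(parsePositive("-5"));
        } catch (IllegalArgumentException e) {
            // NumberFormatException is a subclass of IllegalArgumentException, so this
            // catch matches both; execution resumes here after the throw.
            System.out.println("Bad input: " + e.getMessage());
        }
    }
}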
Why is the YouTube algorithm so much better at finding similar music compared to Spotify and other music providers?
Google has better programmers, and they’ve been working on the problem space longer than either Spotify or the other providers have existed.
YouTube has a year and a half on Spotify, for example, and they’ve been employing a lot of “organ bank” engineers from Google proper, for various problems — like the “similar to this one“ problem — and the engineers doing the work are working on much larger teams, overall.
Spotify is resource starved, because they really aren’t raking in the same ratio of money that YouTube does. By Terry Lambert
Is coding Java in Notepad++ and compiling with command prompt good for learning Java?
Over the past two decades, Java has moved from a fairly simple ecosystem, with the relatively straightforward ANT build tool, to a sophisticated ecosystem with Maven or gradle basically required. As a result, this kind of approach doesn’t really work well anymore. I highly recommend that you download the community edition of IntelliJ IDEA; this is a free version of a great commercial IDE. By Joshua Gross
How do you handle a JSON response in Java?
Best bet is to turn it into a record type as a pure data structure. Then you can start to work on that data. You might do that direct, or use it to construct some OOP objects with application specific behaviours on them. Up to you.
You can decide how far to take layering as well. Small apps work ok with the data struct in the exact same format as the JSON data passed around. But you might want to isolate that and use a mapping to some central domain model. Then if the JSON schema changes, your domain model won’t.
Libraries such as Jackson and Gson can handle the conversion. Many frameworks have something like it built in, so you get delivered a pure data struct ‘object’ containing all the data that was in the JSON
Things like JSON Validator and JSV Schemas can help you validate the response JSON if need be. By Alan Mellor
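As a small illustration of that approach, here is a sketch using Jackson's ObjectMapper (the record and its fields are invented; binding JSON directly to records assumes Java 16+ and a reasonably recent Jackson, 2.12 or later):

import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonDemo {
    // A pure data structure mirroring the JSON payload (hypothetical fields).
    record Customer(String name, int level) {}

    public static void main(String[] args) throws Exception {
        String json = "{\"name\":\"Tyler\",\"level\":15}";

        ObjectMapper mapper = new ObjectMapper();
        Customer c = mapper.readValue(json, Customer.class);  // JSON -> record

        System.out.println(c.name() + " is on level " + c.level());
    }
}

From here you can either pass the record around as-is in a small app, or map it onto your central domain model as the answer suggests.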
What is the tech stack behind Slack?
Keith Adams already gave an excellent overview of Slack’s technology stack so I will do my best to add to his answer.
Products that make up Slack’s tech stack include: Amazon (CloudFront, CloudSearch, EMR, Route 53, Web Services), Android Studio, Apache (HTTP Server, Kafka, Solr, Spark, Web Server), Babel, Brandfolder, Bugsnag, Burp Suite, Casper Suite, Chef, DigiCert, Electron, Fastly, Git, HackerOne, JavaScript, Jenkins, MySQL, Node.js, Objective-C, OneLogin, PagerDuty, PHP, Redis, Smarty, Socket, Xcode, and Zeplin.
Additionally, here’s a list of other software products that Slack is using internally:
Marketing: AdRoll, Convertro, MailChimp, SendGrid
Sales and Support: Cnflx, Front, Typeform, Zendesk
Analytics: Google Analytics, Mixpanel, Optimizely, Presto
Slack is used by 55% of Unicorns (and 59% of B2B Unicorns)
Slack has 85% market share in Siftery’s Instant Messaging category on Siftery
Slack is used by 42% of both Y Combinator and 500 Startups companies
35% of companies in the Sharing Economy use Slack
(Disclaimer: The above data was pulled from Siftery and has been verified by individuals working at Slack) By Gerry Giacoman Colyer
When should programmers use recursion?
Programmers should use recursion when it is the cleanest way to define a process. Then, WHEN AND IF IT MATTERS, they should refine the recursion and transform it into a tail recursion or a loop. When it doesn’t matter, leave it alone. Jamie Lawson
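For instance (a made-up Java example, not from the answer): a sum over 1..n written recursively first, because that is the cleanest definition, then refined into a loop once it matters. The JVM does not perform tail-call optimization, so very deep recursion risks a StackOverflowError.

public class SumDemo {
    // Clean recursive definition: sum(n) = n + sum(n - 1)
    static long sumRecursive(long n) {
        if (n <= 0) return 0;
        return n + sumRecursive(n - 1);
    }

    // The same computation refined into a loop -- constant stack space.
    static long sumIterative(long n) {
        long total = 0;
        for (long i = 1; i <= n; i++) {
            total += i;
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumRecursive(1000));   // 500500
        System.out.println(sumIterative(1000));   // 500500
    }
}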
Why is multithreading so underused?
Mostly because:
Multithreading is not applicable for most problems (see reason #3).
For a substantial subset of the problems that multithreading is applicable for, the rewards for using it are not significant enough to be worth the extra development effort.
For a subset of the remaining use cases, using multithreading requires rethinking how you solve the problem, in order to break it up into separate chunks that can be processed by different threads and the results then recombined.
Besides extra development effort in this sense, this also adds extra overhead to the solution, overhead which may outweigh the benefits of using multithreading.
Add to all of the above, multithreading gives the programmer a lot of rope with which they can easily hang themselves, so they tend to approach it with caution. Or they don’t, and end up hanging themselves.
Finally there is a small but important set of problems — including, for example, machine learning and big data — for which multithreading could be useful but is probably superseded by multiprocessing and cloud architectures.
This requires the same sort of redesign work that I mentioned in #3 above, but it happens at a higher logical and system level than multithreading. Instead of multiple threads, running inside the same process and talking to each other, you end up with multiple processes, quite likely running on different server instances (docker usually, sometimes virtual servers), possibly on different server hardware, talking to each other via network.
Multithreading is generally useful for two sorts of problems:
Problems that are easily chunked up and farmed out to multiple threads or processes, then the results returned and combined. Also called “highly parallelizable”.
Of course a lot of 3D rendering is highly parallelizable… and almost all computers have specialized GPU hardware for doing that much faster than any CPU can.
Problems of which some part is strongly I/O bound, the most common example of which is user interfaces, which spend most of their time waiting on human reaction speeds.
And in fact, multithreading is used a lot in user interfaces, and web servers, which have to contend with the same issue. By Stevens J. Owens
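As a minimal illustration of the "chunk it up, farm it out, recombine" pattern described above, here is a Java sketch using the standard ExecutorService (the workload is made up; a real task would need to be large enough to repay the threading overhead):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelSum {
    public static void main(String[] args) throws Exception {
        long[] data = new long[1_000_000];
        for (int i = 0; i < data.length; i++) data[i] = i;

        int threads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(threads);

        // Chunk the array and farm each chunk out to a worker thread...
        int chunk = data.length / threads;
        List<Future<Long>> parts = new ArrayList<>();
        for (int t = 0; t < threads; t++) {
            int from = t * chunk;
            int to = (t == threads - 1) ? data.length : from + chunk;
            parts.add(pool.submit(() -> {
                long sum = 0;
                for (int i = from; i < to; i++) sum += data[i];
                return sum;
            }));
        }

        // ...then recombine the partial results.
        long total = 0;
        for (Future<Long> part : parts) total += part.get();
        pool.shutdown();

        System.out.println(total);
    }
}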
When is it (if ever) a good idea to use JavaScript instead of TypeScript?
TypeScript is helpful when you have a large codebase which is going to be updated many times by many collaborators. When you are not in that use case, the advantages of TypeScript are much less obvious, besides it is possible to be too orthodox with TypeScript and prevent behaviors which are acceptable. It’s also possible (easy, even) to feel that your TypeScript implementation prevents behaviors which it actually allows. So let’s all agree that TypeScript is no silver bullet.
So TypeScript doesn't always make things better, and sometimes it makes them worse. There are situations where transpiling TS to JS is just not an option. Also, transpiling with types will always make the resulting JS file larger, when sometimes you have to specifically optimize for the smallest code file. Jerome Cukier
Is Node.js better than Golang in the perspective of development speed (e.g., you write less code)?
If you use JavaScript source code with Node then yes! You will probably write shorter lines.
Go has pesky things like type information in it. It has interfaces and error returns, all needless clutter that just gets in the way of that programmer brain dump. You even have to type := instead of = to assign variables!
It’s almost like the makers of Go just wanted you to type more stuff in for the same program. Maybe they had reasons, eh, who knows? They’d probably say there were benefits to doing so.
But yes your keystrokes will be fewer with JavaScript. By Alan Mellor
Why is the C programming language not used for smartphones and other hardware devices instead of Java?
Your phone runs a version of Linux, which is programmed in C. Only the top layer is programmed in java, because performance usually isn’t very important in that layer.
Your web browser is programmed in C++ or Rust. There is no java anywhere. Java wasn’t secure enough for browser code (but somehow C++ was? Go figure.)
Your Windows PC is programmed mostly in C++. Windows is very old code, that is partially C. There was an attempt to recode the top layer in C#, but performance was not good enough, and it all had to be recoded in C++. Linux PCs are coded in C.
Your intuition that most things are programmed in java is mistaken. Kurt Guntheroth
How do you declare an array globally in Java?
That’s not possible in Java, or at least the language steers you away from attempting that.
Global variables have significant disadvantages in terms of maintainability, so the language itself has no way of making something truly global.
The nearest approach would be to abuse some language features like so:
public class Globals {
    public static int[] stuff = new int[10];
}
Then you can use this anywhere with
Globals.stuff[0] = 42;
Java isn’t Python, C nor JavaScript. It’s reasonably opinionated about using Object Oriented Programming, which the above snippets are not examples of.
This also uses a raw array, which is a fixed size in Java. Again, not very useful, we prefer ArrayList for most purposes, which can grow.
I’d recommend the above approach if and only if you have no alternatives, are not really wanting to learn Java and just need a dirty utility hack, or are starting out in programming just finding your feet. Alan Mellor
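Since the answer prefers ArrayList for most purposes, here is a tiny comparison sketch (values are made up) showing the fixed-size raw array next to the growable list:

import java.util.ArrayList;
import java.util.List;

public class ListDemo {
    public static void main(String[] args) {
        int[] fixed = new int[10];                   // raw array: size fixed at 10 forever
        fixed[0] = 42;

        List<Integer> growable = new ArrayList<>();  // grows as elements are added
        growable.add(42);
        growable.add(7);
        System.out.println(growable.size());         // 2
    }
}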
In which situations is NoSQL better than relational databases such as SQL? What are specific examples of apps where switching to NoSQL yielded considerable advantages?
Warning: The below answer is a bit oversimplified, for pedagogical purposes. Picking a storage solution for your application is a very complex issue, and every case will be different – this is only meant to give an overview of the main reason why people go NoSQL.
There are several possible reasons why companies go NoSQL, but the most common scenario is probably when one database server is no longer enough to handle your load. NoSQL solutions are much better suited to distributing load over shitloads of database servers.
This is because relational databases traditionally deal with load balancing by replication. That means you have multiple slave databases that watch a master database for changes and replicate them to themselves. Reads are made from the slaves, and writes are made to the master. This works up to a certain level, but it has the annoying side effect that the slaves will always lag slightly behind, so there is a delay between the time of writing and the time the object is available for reading, which is complex and error-prone to handle in your application. Also, the single master eventually becomes a bottleneck no matter how powerful it is. Plus, it's a single point of failure.
NoSQL generally deals with this problem by sharding. Overly simplified, it means that users with userid 1-1000000 are on server A, users with userid 1000001-2000000 are on server B, and so on. This solves the problems that relational replication has, but the drawback is that features such as aggregate queries (SUM, AVG, etc.) and traditional transactions are sacrificed.
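To make the sharding idea concrete, a toy Java sketch of range-based routing (the server names and userid ranges are invented, mirroring the oversimplified example above):

public class ShardRouter {
    // Overly simplified: route each user to a database server by userid range.
    static String serverFor(long userId) {
        if (userId <= 1_000_000) return "server-A";
        if (userId <= 2_000_000) return "server-B";
        return "server-C";
    }

    public static void main(String[] args) {
        System.out.println(serverFor(999_999));    // server-A
        System.out.println(serverFor(1_500_000));  // server-B
    }
}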
Chrome is coded in C++, assembler and Python. How could three different languages be used to obtain only one product? What is the method used to merge programming languages to create software?
Concretely, a processor can directly execute only one kind of instruction: its own machine code, which assembler maps onto. The exact instruction set also depends on the type of processor.
Since assembler requires several operations just to perform a simple addition, compilers were created which, starting from a higher-level language (easier to write), are able to automatically generate the assembly code.
These compilers can sometimes accept several languages. For example, the GCC compiler can compile both C and C++, and it also supports embedding pieces of assembler inside, introduced by the __asm__ keyword. Assembler is still something to avoid as much as possible, because it is completely machine-dependent and can therefore be a source of incompatibilities and unpleasant surprises.
More generally, we also often create multi-language applications using several components (libraries, DLLs, ActiveX, etc.). The interfaces between these components are managed by the operating system and allow Java, C, C++, C#, Python, and everything you could wish for to coexist happily. A certain finesse is however necessary in the transitions between languages, because each one has its implicit rules, which must therefore be enforced very explicitly.
For example, an object coming from the C++ world and transferred through these interfaces into a Java program will have to be explicitly destroyed; the Java garbage collector only manages its own objects.
Another practical interface is web services: each module, whatever its technology, can communicate with the others by exchanging serialized objects in JSON… which is much less error-prone! Source: Vincent Steyer
What is the most dangerous code you have ever seen?
The most dangerous line I have seen is a shell one-liner that plays Russian roulette with your machine: on each run there is a one-in-six chance that it removes the entire filesystem (starting from the root /); otherwise it simply prints "click".
How difficult is LeetCode, How is it used in a practical way?
Practically, it is used for two purposes:
Practicing coding-in-the-small, like a daily crossword puzzle for programmers
Pre-screens for certain interview processes
Certain interview processes ask LeetCode style questions as a technical test. Not all do. Possibly not even most. Source
Which type of software developer should learn first, C, Python, or JavaScript?
If you plan to be a professional general software engineer:
C, then Python, then JavaScript.
If you plan to be a professional Web developer:
JavaScript, then Python, then C.
If you want to learn application programming as a hobby:
Python, then JavaScript, then C.
If you want to learn embedded systems programming as a hobby:
C, then Python. Skip JavaScript.
In general, learning C first will give you a great grounding in computing and computational machinery, whilst giving you useful programming skills. It’s not the easiest journey, but if you know C well, everything else becomes easier. Source.
Are HTML and CSS still relevant in 2022?
Relevant?
They’re unavoidable if you’re a Web frontend developer and not using a frontend framework that autogenerates HTML and CSS.
If you’re a backend developer or working entirely outside of Web development (there’s actually a lot of that) then HTML and CSS are, for you, completely irrelevant. Dave Voorhis
What is a disadvantage of JavaScript?
Richard Kenneth Eng covered most of the major issues with JavaScript itself, so I won’t repeat. Instead of focusing on the weirdness inherent to the language, I want to focus on JavaScript in the ideal, and what disadvantages may lie therein. When I say in the ideal, what I mean is what disadvantages exist if we assume perfect application of the language without concern for the quirks, because even there, problems exist.
For me, the single biggest disadvantage to JavaScript is that best practices can change rapidly and without notice. This is because all JavaScript is running in an engine, be it Blink in Chrome and Node, SpiderMonkey in Firefox, Chakra in Edge, or Webkit in Safari.
Since competition among browsers is so fierce, JavaScript performance is of the utmost importance. That means that tests and performance profiles for code that were done six months ago could be obsolete. The major companies try to alleviate this confusion somewhat with docs providing insight into the engine (Chrome[1][2], Firefox[3][4], Edge, Safari[5]) and future direction of development, but there are no guarantees. Your ideal machine could suddenly, and by no action of your own, no longer be ideal.
For example, not that long ago, using an array.join() to build large strings was best practice. Today, brute-force concatenation is wildly faster. Or for a more conceptual example, tail call optimization. This is a major part of functional programming. It is part of the ES6 spec. Chrome had it available, but it has since been pulled from Firefox, Chrome, Node, and Edge. Only Safari supports it.
Contrast this with the relatively stable internal implementation of things in Python. Yes, Python can be woefully slow for some operations, but how Python will run is much better known than JavaScript. Source: Aaron Martin Colby
I see this as the key problem for JavaScript, even in an idealized form.
What should beginner programmers know about software testing?
It exists.
It takes time.
It requires culture and discipline.
Unit testing is what takes the least time.
Hours writing an automated test is time invested, not time wasted.
Once into it, you would not believe that a while ago you were not taking testing seriously enough.
Testing allows the programmer (either the one who wrote the code initially or a new programmer) to refactor the code without as much fear of breaking something.
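To make "unit testing" concrete, here is a minimal sketch using JUnit 5 (the class under test is invented for illustration):

import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

class PriceCalculatorTest {
    // Hypothetical code under test: applies a 10% discount.
    static double discounted(double price) {
        return price * 0.9;
    }

    @Test
    void appliesTenPercentDiscount() {
        // The test pins down expected behavior, so later refactoring is less scary.
        assertEquals(90.0, discounted(100.0), 0.0001);
    }
}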
R is an environment for developing and implementing statistics and data analysis. The newest methods are overwhelmingly written in R.
Python is a general purpose programming language. It has lots of stat capabilities, but is, AFAIK, used much, much less for the development of new methods. By Peter Flom
Do some software engineers fall into the trap of copying code from Stack Overflow that solves their problem without understanding how the code actually works?
I think that “copying code” is extremely rare among stronger developers, but it seemingly must be something that happens given the number of memes that reference it.
I’ve also seen people post that “it’s faster to copy the code than to write it.” This frankly shocks me. I can’t even imagine how a search for the exact bit of code that you need could possibly be faster than just writing the code.
I mean, there do exist some pretty hairy algorithms that would be hard to get right in one go. And if you can’t find a library, then in those rare cases starting with working code might make sense.
But I’m talking a once per year kind of exceptional experience, if that. And given that I’ve also seen people claim that all software developers really need to know is how to iterate over lists and concatenate strings, I really doubt any of the really complex algorithms are what people are picking up.
So here's the deal: If it's faster for a developer to look up the code than to write it, what are the odds they will actually be able to fully understand it? They didn't write it, so they certainly don't understand it as well as if they had written it.
One final note: A few times recently I’ve seen a comparison between copying and pasting code and using libraries. It’s profoundly different from using a library. A library I would choose would:
Be tested for corner cases and not just demonstrate a technique
Be reviewed for security vulnerabilities
Be verified by unit and system tests
Be used in hundreds of projects, ensuring that it works in many situations
Be updated frequently when any problems or security flaws are discovered, which will trigger a warning (and actually send me an email) telling me that the library needs to be patched and why.
I think that one of the major anti-patterns common in the PHP world is to copy and paste code in preference to installing libraries. You end up with millions of copy-pasted security holes that are literally millions of times more difficult to find and fix.
So no, it’s not the same by any stretch. Copying code from StackOverflow is an anti-pattern. Looking at code to see how a library or method is intended to be used is fine. Looking for docs on a language is fine. I’m not against using the internet as a reference. Source.
No. Just Python will not be enough to land a job. You need 5 more things.
1. Companies don’t hire a Python dev. They hire a problem-solver.
If you have learned X and can't do Y with the concepts you learned from X, you will not get hired. It's impossible to know what problems you will have to solve when you get hired, or what problem you will be solving two or three years from now. That's why companies look for people who can take any problem and solve it using coding techniques.
For example, you have learned the dictionary data structure. Now, if I give you a new situation (car dealership, book club, grocery store, or bank software, etc.) and you don’t know how to use the dictionary data structure in that situation, you will not get hired.
So,
Don't just learn coding. Pay attention to why you are doing certain things, and to what else you could do to solve the problem.
What should you absolutely never do when using Python? Python Do's and Don'ts
1. Don't do this:
a = []
for i in range(x):
    if i % 2 == 0:
        a.append(i)
Rather do this:
a = [i for i in range(x) if i%2 == 0]
2. Don’t do this:
arr = ['This', 'is', 'a', 'sentence']
s = ''
for i in range(len(arr) - 1):
    s = s + arr[i] + ' '
s = s + arr[-1]
#rather do this:
s = (' ').join(arr)  # This is a sentence
3. Don’t do this:
name = 'Tyler'
level = 15
rank = 'Supreme'
#instead of doing this:
print('Hello ' + name + ', you are on level ' + str(level) + ' and your rank is ' + rank + '.')
rather do this:
print('Hello {}, you are on level {} and your rank is {}.'.format(name, level, rank))
4. Don’t do something that has been already done, I mean use libraries (if there are) instead of doing something from scratch.
5. Don’t do this:
if a > 5:
    v = True
else:
    v = False
Do this:
v = a > 5  # sets v to True if a > 5, else False
This one is only restricted to Booleans
Let's say you wanted it to be either yes or no instead of True or False:
v = 'Yes' if a > 5 else 'No'
You can take this a step further and do this:
v = ('No', 'Yes')[a > 5]
Here a > 5 can either be False (0) or True (1), so if it's True, it will return the element at index 1 ('Yes'), and 'No' if a > 5 returns False (index 0).
6. Don’t do something like this:
if a == True:
    b = False
if a == False:
    b = True
# rather do this
b = not a
7. You can use libraries instead of doing the stuff from scratch
YOU DON’T NEED TO REINVENT THE WHEEL
There are a vast number of libraries out there
8. Always use functions instead of copy pasting the code over and over again
Last but not least: don't feel embarrassed if you can't understand something. A problem with new programmers is that they hesitate to ask. You can ask someone online if you don't understand something. There are websites like Stack Overflow, Quora, Reddit, and other forums. Always feel free to post your question. This goes for all programming languages, not just Python.
2. Companies don’t hire a single skill. They hire a set of skills.
Just Python is like plain coffee: it doesn't taste good. You need to add milk, sugar, or caramel to make it tasty. Similarly, don't just learn Python. You also have to learn a little bit about other programming languages. You don't have to be a master of those, but you need to know a little bit.
To do web development with Python, you need to know HTML, CSS, and JavaScript. Without a basic understanding of HTML, CSS, and JavaScript, you won't be able to master Python frameworks like Django, Flask, etc.
You must learn a little bit about Database (SQL). How to structure a table. How to query data from a table. How to join data from two tables.
If you want to become a Machine learning developer, you need to know the basics of Mathematical modeling, how to train a model and what are the different modeling approaches.
Also, you could be just a front-end developer or just the database person. However, you need to know how full-stack software development works, and how the front end, back end, and database are connected.
3. Don’t just learn Python. Learn the overall Software Development process.
Unfortunately, most companies don't want to spend time training you in the overall software development process. That's why you will hear that companies are looking for X years of experience. To compete with that requirement…
So,
Build full-scale projects. Have at least 3 projects on your GitHub.
Don't just copy a project from somewhere. Instead, try to build it yourself. While developing the project, you will get stuck numerous times. Try to find solutions online; struggling to find the solution will make you a better developer.
Deploy your projects on some server. It could be Heroku or somewhere else.
Get familiar with popular Python libraries and frameworks like NumPy, Pandas, Scrapy, Django, etc. Play with them. Use them in some projects.
Write unit tests. Put enough comments in your code. Know how to organize code. Learn Python best practices like PEP 8, the style guide.
Master at least one IDE. Learn keyboard shortcuts.
In most programming languages, why do I have to write “x > 30 AND x < 100” and not “30 < x < 100”?
The first expression is much easier to parse than the other one. It’s just three binary operators combined together. The second expression doesn’t work like that. After reading the first bit, you can’t just say “okay, this is a binary comparison operator”, you need to continue reading forward to determine how to proceed.
Not impossible, just extra difficult for very little gain. If you want to cover just this specific case, and not an arbitrary string of chained comparisons, you can achieve it easily with containment and range operators, like x in 30..100.
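For what it's worth, Python is one of the few mainstream languages that does parse chained comparisons; a quick illustration:
x = 42
print(30 < x < 100)    # True; evaluated as (30 < x) and (x < 100)
print(1 <= 2 < 3 < 4)  # chains of any length work the same way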
I personally avoid using bare new whenever possible. Switching to std::make_unique makes it easier to avoid subtle leak situations by guaranteeing every allocation immediately has an owner that will delete it. This is particularly true in environments that allow exceptions.
If you have a legacy codebase that you contribute to, follow its norms. Otherwise, I strongly encourage using std::unique_ptr to track ownership, and avoid bare new.
You can (and should!) use raw pointers and references to pass objects around. Use std::unique_ptr to manage ownership only. Use std::unique_ptr in interfaces that manage ownership transfer.
Use std::shared_ptr if ownership is shared among threads, or in rare cases where you need more complex lifetime management. The same caveats apply: use it to manage ownership, and to highlight ownership transfer in interfaces. Source
Is it better to write clear but slightly inefficient code or abstruse but optimized code?
One assignment was given as "write the most efficient Scheme code you can to compute the 100th Fibonacci number." The professor, Will, promised that the person who wrote the most efficient code would get some prize (bonus points? I don't remember what it actually was).
There of course are many ways to write such a program. The naive implementation usually involves doubly recursive calls, and might look something like this:
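(The original was in Scheme; what follows is a minimal Python sketch of the same doubly recursive idea.)
def fib(n):
    # naive doubly recursive definition: exponential number of additions
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)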
This is a pretty clear implementation, and for many purposes (perhaps up to n = 20 or so on a modern computer) it might even be "fast enough." But if you consider how it works, it basically computes fib(n) by adding up 1, fib(n) times. By the time n == 50, that's 12,586,269,025, which of course is a lot of ones to add up, and it took a fair amount of time (the growth of fib(n), and hence the number of additions, is exponential).
It’s not hard to come up with an algorithm which exhibits linear behavior (assuming (falsely, but good enough for this argument) constant time additions). It looks something like this (again in Scheme, using tail recursion, and with a helper function):
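(Again rendered here as a rough Python sketch rather than the author's Scheme, with the same accumulator-style helper.)
def fib(n):
    # helper is called exactly n times, carrying the two running values
    # (Python doesn't eliminate tail calls, but the shape of the algorithm is the same)
    def loop(i, a, b):
        if i == n:
            return a
        return loop(i + 1, b, a + b)
    return loop(0, 0, 1)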
I'd submit that this is actually a bit more tricky to understand than the code above, and takes a modicum of thought to come up with, but it works pretty well. It isn't hard to see that the loop procedure is called exactly n times, so it is much better behaved.
But can we do better? As it happens, yes.
This post is already long, but you can read about the method that I chose here: Fast Fibonacci algorithms under the heading of "matrix exponentiation". You can compute the nth Fibonacci number in O(log n) operations (operations being 2×2 matrix multiplies). Numbers can be raised to large powers using a divide-and-conquer algorithm. If a number is raised to an even power, we can compute it by squaring the number raised to half that power. If the power is odd, we can subtract one, use the above trick, and then multiply the result by the base once more. This gives you log(n) recursive calls, each of which does (again, assuming constant speed arithmetic) constant work. Huzzah.
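A compact Python sketch of the matrix exponentiation idea (the author's original was in Scheme) looks roughly like this:
def mat_mul(A, B):
    # multiply two 2x2 matrices given as ((a, b), (c, d))
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return ((a * e + b * g, a * f + b * h),
            (c * e + d * g, c * f + d * h))

def mat_pow(M, n):
    # divide and conquer: square for even powers, peel one factor off odd powers
    if n == 0:
        return ((1, 0), (0, 1))  # identity matrix
    half = mat_pow(M, n // 2)
    result = mat_mul(half, half)
    if n % 2:
        result = mat_mul(result, M)
    return result

def fib(n):
    # [[1, 1], [1, 0]]^n = [[F(n+1), F(n)], [F(n), F(n-1)]]
    return mat_pow(((1, 1), (1, 0)), n)[0][1]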
I coded this up. It worked rather well. I could compute fib(n) for pretty large n in fairly modest time. The code was actually fairly pretty and well documented. I did a couple of small tweaks to make it go slightly faster: the code to multiply two 2×2 matrices could be made slightly faster by taking advantage of the symmetries. A few other tweaks helped modestly, but I didn't adopt any kind of loop unrolling or the like.
This story is getting long. Skip ahead.
After the assignments were graded, he had four of us write our solution on the board. I was selected. I was particularly proud of my solution.
A couple showed minor tweaks of the exponential or linear time codes. He pointed out particular aspects as being noteworthy, but mentioned you could do a lot better.
He then asked me to explain my code. I took a couple of minutes, and explained the fast exponentiation algorithm, and how it computed, and what I expected the time to be. He said “well done, and clearly written.”
He then proceeded to the fourth example. It was easily twice as long as mine, with basically all the matrix multiplies explicitly unrolled into a long, confusing pile of code, with iterative calls that shuffled eight or so variables around. It wasn’t at all clear from the code what was going on, and while I had some clue, I would have hated to debug it.
Will awarded "best program" to him.
I was unconvinced. Mine was clearly easier to read, and I suspected that all this "optimization" didn't do squat in exchange for making the code impossible to read and to maintain. I said so in class. Will patiently explained that he wasn't asking for the clearest code, but rather the fastest code. So I naturally asked, "well, how much faster is it than mine?"
It was then apparent that he had never timed the programs. He had verified that the code gave the right answer, but he had never actually timed them. I suggested that we return to his office and do so.
To his credit, he did so. And we discovered that not only was my code clearer, it was significantly faster. As in twice as fast.
While trying to understand why that must be so, we uncovered what the issue was. In the midst of all the unrolling that he did, his code did one extra function evaluation. In essence, he computed fib(n+1) before just returning fib(n). And as it happens, that one extra evaluation cost a significant fraction of the total time to compute fib(n), because bignum arithmetic is not constant time. My code didn’t do that operation, so it was a lot faster.
In other words, in an attempt to make optimized code, he had inadvertently inserted code which wasn’t a bug exactly (the code returned the right answer) but which didn’t perform as well as code that was clearly written.
I claimed a moral victory for myself, although my recollection is that Will didn't agree with me and said "well, when the bug is fixed his is faster", which was true but, I would submit, irrelevant.
End of the parable. I learned a couple of important lessons.
My belief in writing readable code first was justified. The choice of the proper algorithm gave me virtually all of the speed savings I needed. Additional tweaking that reduced readability to get statistically insignificant gains was not justifiable.
If you aren’t timing, you aren’t optimizing. Will had a preconception about the code performance, but it didn’t match what we measured when the code was actually run. If you aren’t profiling, you are wasting your time “optimizing.” You can only optimize what can be measured, and you have to do the measurements to do optimization.
Ultimately, programs are written as much to be read by programmers as run by machines. Clarity and correctness are almost always the primary consideration, and choosing the right algorithm and approach is often far more important for performance than shuffling the deck chairs in exactly the right way. Source.
To allocate an array of size n in C without initialization, does it take O(1) or O(n) time?
A precise answer depends on the implementation of malloc() that your compiler/operating system uses, but to first approximation, the answer is NEITHER.
On the one hand, for most dynamic memory-management algorithms, the time required to allocate an uninitialized block of memory is independent of the size of the allocated block. On the other hand, the time required to allocate a block usually does depend on the pattern of previous allocations and deallocations.
For example:
If your memory system maintains a simple linked list of freed blocks, a single call to malloc() could require Θ(F) time in the worst case, where F is the number of earlier calls to free(). Let me emphasize that my variable F has no relationship whatsoever to your variable n.
If your system uses buddy memory allocation, a single call to malloc() requires Θ(log(M/n)) time in the worst case, where M is the total size of allocatable memory. The only relationship between M and the block size n is the trivial inequality n ≤ M. In particular, larger blocks are allocated more quickly in the worst case than smaller blocks!
Many common memory-management schemes, like Doug Lea's dlmalloc(), have been observed to run in a small number of instructions on average, in practice. There are also more specialized memory-management schemes like TLSF that provably support malloc() in O(1) worst-case time. In light of these algorithms, it is reasonable to assume, for purposes of crude theoretical analysis, that each call to malloc and free requires O(1) amortized time.
That crude theoretical model usually works well for programs that are CPU-bound, or that are memory-bound but primarily use static or fixed-size allocation. But if dynamic memory management is actually a significant contributor to your code’s running time, you probably need to take off the big-Oh glasses and measure the performance experimentally.
Why do they say, “ints are not integers and floats are not real”?
They say it because it’s true!
For this answer, I will assume types similar to int and float in C or C++. What I describe, though, is true for corresponding data types in many other programming languages, possibly after tweaking some details. It also applies to other integer and floating point types once you adjust the numeric ranges.
Without further ado…
Quick: how many digits of precision does an element of ℝ have? What's the largest element of ℤ? Those questions don't quite make sense, do they? But they seem a lot more sensible for int and float, don't they?
To be precise, when someone says "ints are not integers and floats are not real," they're saying int ≢ ℤ and float ≢ ℝ: every int is an integer but not vice versa, and float is roughly a quirky subset of ℚ plus a few special values.
And as I mention in the Addendum, we can tighten this further: float ⊊ ℤ[1/2] ∪ {+0, −0, +Inf, −Inf, NaN(a)}.
Digging in…
Integers vs. int
For int ≢ ℤ: The int data type in C and C++ has a limited range. Suppose you have a 32-bit 2's complement int. It can hold integers in the range [−2^31, 2^31 − 1]. You can make similar statements for int types with different sizes and representations.
Thus, int are a proper subset of integers. Every int value is an integer, but not every integer fits in an int.
Real vs. float
For float ≢ ℝ, consider that 0.1 + 0.2 = 0.3 in ℝ, but the same isn't true with float.
Try it! Then hop on over to this answer for why:
Why is 0.1+0.2 not equal to 0.3 in most programming languages?
Computers implement a wide range of arithmetic schemes. In some, such as decimal floating point and rational arithmetic, 0.1 + 0.2 does equal 0.3. One computer I own uses radix-100 floating point, and for it, 0.1 + 0.2 = 0.3 as well. Now, in binary floating point arithmetic, including the ubiquitous version defined by IEEE-754 floating point standard, it is true: 0.1 + 0.2 ≠ 0.3. I explain the math in the answer below, working the example in double precision. It’s similar for single precision. Most programs these days use IEEE floating point by default. Programs can choose to implement other forms of arithmetic, including rational arithmetic and decimal floating point. I’ve written a few other answers that discuss how binary floating point arithmetic works, in case you want to read up on it.
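You can check it in any Python REPL (CPython's float is an IEEE-754 binary64, so the same thing happens in C, Java, JavaScript and most other languages):
>>> 0.1 + 0.2
0.30000000000000004
>>> 0.1 + 0.2 == 0.3
False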
The IEEE 754 binary32 data type (the most common representation for float these days) is closer to being a quirky subset of the rationals, ℚ.
Let's ignore special values such as NaN(a), subnormals, +0, −0, +Inf, and −Inf for a moment. Each of the remaining values is called a normal value, and is the product of a constrained integer (the significand) and a constrained power of 2 (the exponent).
The significand is constrained to the disjoint ranges [−2^24 + 1, −2^23] and [2^23, 2^24 − 1].
The exponent factor is 2^(E−23), with −126 ≤ E ≤ 127.
Because some of those exponents are negative, the resulting number is in ℚ.
Subnormal values are also in ℚ. Like normals, the significand has two disjoint ranges: [−2^23 + 1, −1] and [1, 2^23 − 1]. The exponent factor is fixed at 2^(−149).
The remaining special numbers +0, −0, +Inf, and −Inf don't quite slot into ℚ or ℝ. The infinities behave somewhat like +∞ and −∞, and in fact I usually write them that way to save typing. And the signed zeros +0 and −0 both behave mathematically like 0 nearly everywhere.
But really, the values ±Inf stand in for all values too large to express in the type. You can have arithmetic which results in a finite value in ℝ or ℚ, and yet is Inf in a float.
And the values ±0 essentially stand in for all the values whose magnitude is too small to express in the type. We at least get to remember their signs. Their signs aren't visible most of the time; however, +1/+0 = +Inf and +1/−0 = −Inf.
And then there's NaN(a). These are Not-a-Numbers, and as their name suggests, they are not numbers. They come in two flavors (signaling and quiet), and can have a payload. For now I have abstracted those details away in an abstract argument a. Because they aren't numbers, they aren't in ℝ or any other set of numbers.
So, float values are not a proper subset of any other numeric category, because some float values aren't numbers, and some of the numeric values have special properties. At the very least, we can say normals and subnormals are a proper subset of ℚ. Beyond that, "it's complicated."
Addendum:
It turns out that the set of rationals with power-of-2 denominators is known as the dyadic rationals, and these form a ring denoted by ℤ[1/2].
The set of floating point normals and subnormals is thus a proper subset of ℤ[1/2].
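Python makes this easy to see: float.as_integer_ratio() gives the exact fraction a float stores, and the denominator always comes out as a power of 2, i.e. the value really does live in ℤ[1/2]:
>>> (0.1).as_integer_ratio()
(3602879701896396800, 36028797018963968)
>>> 36028797018963968 == 2 ** 55
True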
I'm a competitive programmer, and I have spent a lot of time learning algorithms and techniques that you will never use in real-life programming. However, let me tell you something: I'm currently starting to learn Android development, and most of the people I know spent a LOT more time learning concepts that only took me 1 to 2 days to learn.
I think the benefits of competitive programming boil down to training your mind to think faster and to think in new ways that most other programmers can't. It's like being an ex-footballer: you can easily enter the domain of basketball if you want to, because you already have the muscle mass and agility needed for these kinds of sports, and the only thing you need to focus on is the rules of basketball, how to use your hands instead of your feet, how to achieve certain goals, etc. Thus, competitive programming helps you build a solid base of computer science knowledge that will give you great benefits in the future when you want to learn anything simpler or relatively easier.
I never regret wasting time on competitive programming and I still compete from time to time in online contests. Source: Andree Kaba
As a software engineer, do you feel the biggest advantage of unit tests is regression testing the code?
For me, the biggest advantage is getting really fast design feedback. I’m able to think about how my code should be connected to other parts of the system in ways that make it easy to split apart. I’m able to run that code in isolation, without spinning up the rest of the system.
At that point, I can be confident that the public interface to that code is making it easy to work with. I can then dive in to make sure that all the logic details work as I expect – not by reading the code, but by running the code and having an automatic pass/fail check.
Regression tests are useful and important. They are way down the list for me, though. Tests are part and parcel of how I think about code.
Why should all the unit tests be independent of each other?
Two reasons:
easier to identify the root cause of a failure when there is only one reason it can fail
easier to understand and add test cases when you don’t have to consider history
Making tests depend on the state your application is in before you run the test is a major problem. It makes the tests less repeatable. They become less clear to understand – what exactly are they testing? It’s not self-describing in the test case itself – you have to ‘know’ what state the application is in. By Alan Mellor
Do software engineers learn profiling/monitoring techniques on the job, and is it not generally covered in the computer science curriculum?
Software engineers learn most of their skills on the job. A lot of hard lessons come from bugs and outages. When there is an outage, large tech organizations require that a document be written and reviewed; if there was an easy fix that could have prevented the outage, or that would make it less likely, and it affects things you are responsible for, you definitely remember it. Source
What is the fastest way to read all the bytes of a 50GB file?
If you need to access the entire contents of the file in non-sequential order, repeatedly, why not get a machine with 64GB of memory and read the entire file into memory once, then keep it locked in memory and do all the actual work there? In that case, reading it is a fixed cost at program start. Does it really matter if it’s super fast?
Can the file be seen as a sequence of records that can be operated on individually? If so, it’s probably far better to split the file into 2GB shards and distribute those over multiple SSDs. Then work on those shards in parallel, e.g. using 24 cores on a single machine. Depending on the workload, you might be constrained by the combined maximum width of all buses involved, and you may get higher throughput by distributing the work across several machines.
Are you really really sure that you need to read the entire file? Again, you need to answer the question of what you actually want to do after you’ve read the file. Depending on what that is, it may turn out to be faster to build an external index offline, read that, and use it to only access the parts of the larger file you actually need.
To recap, taken literally the question has a trivial answer: Don’t read a file if you’re not going to do anything with it. If you are, though, then the actual computation you are planning to do will inform the organization of the data, including where and how to store the data (a single 50GB file may not be the best idea) and how to access it for maximum throughput. Source.
Can a machine code like a human programmer?
Not for general purpose programming – but for certain very constrained tasks, that’s a routine thing to do.
EXAMPLE OF WHERE COMPUTERS ALREADY DO THAT:
For example, a “compiler” for a human programming language like (say) C++ has the task of writing a machine-code program that does exactly what the C++ code would do. Compilers are now MUCH better at doing that than human programmers are.
So if you wanted some kind of an AI machine to write code – you’d need a way to precisely describe what you wanted it to do under all circumstances.
EXAMPLE OF WHERE IT WOULD BE VERY DIFFICULT:
For example: “Hey computer – write me a program to take a sentence and reverse the order of all of the letters in each word.”
That SOUNDS like something that an artificial intelligence would be able to write – but it turns out that it couldn’t. The difficulty is that the specification for this problem is “under-specified”.
For example, should we consider "under-specified" to be one word or two? Do you want the answer to be rednu-deificeps or deificeps-rednu? This matters. Do you want the compound word "afterlife" to be reversed like "after" and "life" or as one word? What about the sentence:
“Pi is approximately 3.1415926”
Do you want the number reversed? Is it a “word”?
Do you want the capital letter on the first word of a sentence to be un-capitalized and the new first letter of the first word to be capitalized instead?
“Hello world” => “Olleh dlrow” or “olleH dlrow” ?
THIS MAY SOUND TRIVIAL BUT…
“Hey computer – write me a program to drive a car.” – ends up being a MASSIVE specificational nightmare – details of how it should obey the law – and examples where disobeying the law is necessary to avoid killing a pedestrian.
Consider Tesla's task of building an actual, for-real, self-driving car by training an AI.
THE POINT BEING:
In order to have an AI translate your requirement into a program that will actually WORK – you need to describe the problem precisely.
Computer programming languages are (in a sense) very rigid, specific, unambiguous ways to tell the computer what machine-code program you’d like it to write for you.
A lot of what human programmers do is to think about these kinds of issues…and writing the actual code isn’t all that big of a deal. Source.
What is the strangest sorting algorithm?
Invented on 4chan by some anonymous poster, I bring you sleep sort.
The algorithm basically works like this:
For every element x in an array, start a new program that:
Sleeps for x seconds
Prints out x
The clock starts on all the elements at the same time.
It works for any array that has non-negative numbers.
Not every day that you invent a sorting algorithm on an online forum.
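For the curious, a toy Python sketch of the joke, using threads instead of separate programs (it only "works" for small, well-separated non-negative numbers):
import threading
import time

def sleep_sort(values):
    result = []
    def worker(x):
        time.sleep(x)        # sleep for x seconds, then emit x
        result.append(x)
    threads = [threading.Thread(target=worker, args=(v,)) for v in values]
    for t in threads:        # the clock starts on all elements at (roughly) the same time
        t.start()
    for t in threads:
        t.join()
    return result

print(sleep_sort([3, 1, 2, 0]))  # [0, 1, 2, 3], most of the time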
A fork bomb: a self-replicating piece of code that can be written incredibly simply in almost all programming languages and will grind most machines to a halt in no time at all, due to the nature of exponential increase.
Here it is in basic C. All the program does is create another process, over and over again, until all resources are exhausted, usually by simply filling up the operating system's process table.
#include <unistd.h>
int main(void)
{
while(1) fork();
}
As noted in the Wikipedia example, careful use of ulimits for non root users on *nix machines can help protect against this.
Another example of mostly unintentionally dangerous code is the humble off-by-one error, which is probably one of the most common causes of security flaws in modern software. This is where programmers pay insufficient attention to the extent of memory they have allocated, or don't guard its limits correctly, and someone is able to (accidentally or deliberately) inject bytes where they should not be, with unpredictable errors, crashes, or potentially full exploits of the host machine resulting.
As a software engineer, what is the weirdest feature you were ever asked to create?
I was working as a game programmer and the score for our game was displayed in the corner, but lacked delimiters so it was hard to read. That is, it displayed the score like this:
1000000
Instead of this:
1,000,000
It irritated me, so I wrote a function to fix it. I wasn’t feeling very creative that day, so I called it SlapCommasInThisHereString(). I figured I’d change it later. I checked it in and moved on.
When the Lead Programmer saw it, he flipped. “Hey, who put in this function?” Sheepishly, I fessed up.
“That’s freaking awesome, man! That’s the most literal function name I’ve ever seen! You get a gold star, Chris.” Source.
Which software’s code awes you with its sheer audacity and brilliance?
LLVM.
If you’ve never heard of it before, LLVM is an acronym for “low-level virtual machine” and it is arguably the single most significant innovation in the development of compilers to this day.
Prior to LLVM, if you wanted to write a compiler for a programming language, you first had to write a front end and then a unique backend for every architecture that you wanted to target; that means that if you wanted to target both x86 and ARM for example, then you would have to write two almost entirely different backends because x86 assembly is obviously different from ARM assembly. These days with LLVM, you just write a single front end that compiles to LLVM IR and you’re basically done; it’s pretty amazing.
And LLVM basically runs the world: Google uses it to compile their C++ code, the Rust compiler uses it, Apple uses it for Swift, Oracle is using it for their new GraalVM for Java and it even provides a JIT compiler too, which coincidentally also happens to be the same JIT compiler that Julia uses; it can even target freakin’ GPUs like it does in the case of Nvidia’s CUDA. Source.
How does Lambda calculus relate to functional programming?
It’s quite simple. Functional programming is an attempt to program using mathematical functions (ones with no side effects) rather than devices focused on state, because state is messy and easily leads to certain kinds of errors. And recursion lets one do things that are inductive (which is generally why one needs state).
So, the lambda calculus lets one talk about recursion and about what is bound to what, and when. Little functions that have variables that recurse and capture the evolution of state, without having one global (and thus messy and hard to reason about) state. It's the mathematical basis for being able to do functional programming.
Functional programming is the way one writes programs using that as a basis. And one sees the connection when one writes closures and lambda functions, which are functions that don't have a name; they are one of the inventions of Church (the inventor of the lambda calculus). So, functional programming is a practical application of lambda calculus theory.
To see this in other words, read Ian Joyner’s answer. There is the theory (computation) and the practice (computers). They go hand in hand, but are not the same. Source
I learned data structures and algorithms but I always fail to solve a question by myself and in online assessments, the questions are getting harder and harder. What should I do?
Learning a data structure means being able to readily identify
Why it exists
What are its advantages
What are its disadvantages
Likewise, solving a problem means being able to identify all of its variants. Very often, the same problem can be worded to sound like a very different problem, but the underlying solution remains the same. If you cannot identify those variants then you haven’t really learnt the underlying data structure or algorithm that solved the problem.
The commonly used data structures and algorithms are limited and they have been thoroughly researched and documented over the years. Beyond a point, there is no scope for innovation in those areas unless for academic purposes.
If you are finding an endless stream of new problems, that is likely because you are unable to map them to the old problems you have already studied. Source.
Why might a developer declare a member function as private? What is the reason?
To tell other developers – including themselves in the future – that this function is an unimportant detail of how the code is implemented today.
This tells others that they are free to change that implementation. They are free to change, add, or modify anything that is truly private, without fear of breaking anything else. The changes will be contained and will not ripple out all over the system.
Had that method been made public, that would suggest it is an integral part of how the code is to be used. There may be many other pieces of code that depend on it staying the same, or that would also need changing along with any changes made to the function itself.
public vs private is of huge importance in writing readable code. Source
How is Google Maps so fast?
Oh my, this is the first time I've heard someone complain about a computer doing something "so fast"; usually it's the other way around.
Well, for images, they use pyramids, which means that the images are stored at different resolutions in sections or areas.
For this to work, the algorithm that presents those images stores them in your device, so the next time you zoom in or out they are already in memory.
For route searching, each segment of the route, which is the link or street between street corners, is a segment in a node graph and you can use a fast algorithm like Dijkstra or A* to look for the route.
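A bare-bones Python sketch of Dijkstra over such a segment graph (the street names and lengths below are made up for illustration):
import heapq

def dijkstra(graph, start):
    # graph: {node: [(neighbor, segment_length), ...]}
    dist = {start: 0}
    heap = [(0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale heap entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

streets = {'A': [('B', 4), ('C', 1)], 'C': [('B', 2)], 'B': []}
print(dijkstra(streets, 'A'))  # shortest distances from 'A' to every reachable corner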
Now, a “meta” thingie about the answer (basis of modern metaverses).
As with Google's search algorithm, Google Maps "knows" in advance the most common searches and the most commonly retrieved areas of the world, so it can present those particular results faster.
When a million people a day look for "Eiffel Tower" while one person looks for "Bujumbura", the information about Paris is kept in RAM on a fast server while the info about Burundi is kept on disk on a slower server.
Those searches and paths are themselves treated like Dijkstra searches: humans are fairly predictable, so the system can quickly work out what you are after and present the most common results faster. Source
Is the printf function a version of fprintf that does not need to have the file pointer passed as the first parameter because it always writes to the file stdout?
Congratulations on finding one of the longest running C Library easter eggs!
This feature, printf(…) being the same thing as fprintf(stdout, …), is exactly a partial evaluation of one function from another, the kind of thing that languages like OCaml make a first-class feature.
I would add, though, that OCaml's partial evaluation is more useful, because you can make (the equivalent of) a version of fprintf that prints to an arbitrary stream, not just stdout. Source: Here
Why do most JavaScript tutorials always say not to use for loops, is that not wrong, considering most algorithms are built around for loops?
You probably didn’t read the whole sentence. Most likely they’re saying “do not use for loops, instead use forEach/map/filter, when the performance budget allows for it”.
For example, like this:
const arr = [1, 2, 3, 4];
arr.forEach(n => console.log(n));
This prints all the numbers, without having to worry about getting the start and end indices right, off-by-one errors, and so on. It's basically a simplified version of this:
for (let i = 0; i < arr.length; i++) {
console.log(arr[i]);
}
Or, like this, to print only the even numbers:
arr.filter(n => n % 2 == 0).forEach(n => console.log(n));
Which is a simplified version of:
for (let i = 0; i < arr.length; i++) {
if (arr[i] % 2 == 0) {
console.log(arr[i]);
}
}
Obviously sometimes forEach() won’t do, e.g. when you need some more complex iteration instead of just going through every element, or when your code needs to run as fast as possible, e.g. because it’s part of a game engine, but, when you can use it, it’s definitely the better option, there’s just less room for mistakes with it. Source
What is the fastest scripting language on the server side?
Javascript (or more precisely ECMAScript). And it’s a lot faster than the others. Surprised?
When I heard about Node.js in 2009, I thought that people had lost their minds to use JavaScript on the server side. But I had to change my mind.
Node.js is lightning fast. Why? First of all because it is async, but with V8, the open-source engine of Google Chrome, even the JavaScript language itself became incredibly fast. The war of the browsers brought us hyper-optimized JavaScript interpreters/compilers.
Regarding the language: JavaScript is not the most elegant language, but it is definitely a lot better than what some people may think. The current version of JavaScript (or better, ECMAScript as specified in ECMA-262 5th edition) is good. If you adopt "use strict", some strange and unwanted behaviors of the language are eliminated. Harmony, the codename for a future version, is going to be even better and add some extra syntactic sugar similar to some of Python's constructs.
Does JavaScript still sound too archaic? Try CoffeeScript (by the same author as Backbone.js), which compiles to JavaScript. CoffeeScript makes for cleaner, easier, and more concise programming in environments that use JavaScript (i.e. the browser and Node.js). It's a relatively new language that is not perfect yet, but it is getting better: http://coffeescript.org/
Why do tech giants like Google, Amazon, and Facebook use C++ for their back-end? What are the advantages of using C++ against other languages?
In general, the important advantage of C++ is that it uses computers very efficiently, and offers developers a lot of control over expensive operations like dynamic memory management. Writing in C++ versus Java or Python is the difference between spinning up 1,000 cloud instances versus 10,000. The cost savings in electricity alone justify the cost of hiring specialist programmers and dealing with the difficulties of writing good C++ code. Source
To a modern day programmer, would you recommend C++ over Rust? Why or why not?
You really need to understand C++ pretty well to have any idea why Rust is the way it is. If you only want to work at Mozilla, learn Rust. Otherwise learn C++ and then switch to Rust if it breaks out and becomes more popular.
Rust is one step forward and two steps back from C++. Embedding the notion of ownership in the language is an obvious improvement over C++. Yay. But Rust doesn’t have exceptions. Instead, it has a bunch of strange little features to provide the RAII’ish behavior that makes C++ really useful. I think on average people don’t know how to teach or how to use exceptions even still. It’s too soon to abandon this feature of C++. Source: Kurt Guntheroth
What is the most common field in computer science?
Java or Javascript-based web applications are the most common. (Yuk!) And, consequently, you’ll be a “dime a dozen” programmer if that’s what you do.
On the other hand, (C++ or C) embedded systems programming (i.e. hardware-based software), high-capacity backend servers in data centers, internet router software, factory automation/robotics software, and other operating system software are the least common, and consequently the most in demand. Source: Steven Ussery
Your first language doesn’t matter very much. Both Java and Python are common choices. Python is more immediately useful, I would say.
When you are learning to program, you are learning a whole bunch of things simultaneously:
How to program
How to debug programs that aren’t working
How to use programming tools
A language
How to learn programming languages
How to think about programming
How to manage your code so you don’t paint yourself into corners, or end up with an unmanageable mess
How to read documentation
Beginners often focus too much on their first language. It’s necessary, because you can’t learn any of the others without that, but you can’t learn how to learn languages without learning several… and that means any professional knows a bunch and can pick up more as required. Source: Andrew McGregor
Is it worth learning Java now that Node.js exists?
Absolutely.
If you’re a backend or full-stack engineer, it’s reasonable to focus on your preferred tech, but you’ll be expected to have at least some familiarity with Java, C#, Python, PHP, bash, Docker, HTML/CSS…
And, you need to be good with SQL.
That’s the minimum you should achieve.
The more you know, the more employable — and valuable to your employer or clients — you will be.
Also, languages and platforms are tools. Some tools are more appropriate to some tasks than others.
That means sometimes Node.js is the preferred choice to meet the requirements, and sometimes Java is a better choice — after considering the inevitable trade-offs with every technical decision. Source: Dave Voohis
Which language should I learn for back-end web development in 2022?
Just one?
No, no, that’s not how it works.
To be a competent back-end developer, you need to know at least one of the major, core, back-end programming languages — Java (and its major frameworks, Spring and Hibernate) and/or C# (and its major frameworks, .NET Core and Entity Framework.)
You might want to have passing familiarity with the up-and-coming Go.
You need to know SQL. You can’t even begin to do back-end development without it. But don’t bother learning NoSQL tools until you need to use them.
You should be familiar with the major cloud platforms, AWS and Azure. Others you can pick up if and as needed.
Know Linux, because most back-end infrastructure runs on Linux and you’ll eventually encounter it, even if it’s often hived away into various cloud-based services.
You should know Python and bash scripts. Understand Apache Web Server configuration. Be familiar with Nginx, and if you’re using Java, have some understanding of how Apache Tomcat works.
Understand containerization. Be good with Docker.
Be familiar with JavaScript and HTML/CSS. You might not have to write them, but you’ll need to support front-end devs and work with them and understand what they do. If you do any Node.js (some of us do a lot, some do none), you’ll need to know JavaScript and/or TypeScript and understand Node.
That’ll do for a start.
But even more important than the above, learn computer science.
Learn it, and you’ll learn that programming languages are implementations of fundamental principles that don’t change, whilst programming languages come and go.
Learn those fundamental principles, and it won’t matter what languages are in the market — you’ll be able to pick up any of them as needed and use them productively. Source: Dave Voohis
As someone new to programming, how do I know that I have fully understood Python syntax and I’m ready to move on to the next step in learning Python?
It sounds like you’re spending too much time studying Python and not enough time writing Python.
The only way to become good at any programming language — and programming in general — is to practice writing code.
It’s like learning to play a musical instrument: Practice is essential.
Try to write simple programs that do simple things. When you get them to work, write more complex programs to do more complex things.
When you get stuck, read documentation, tutorials and other peoples’ code to help you get unstuck.
If you’re still stuck, set aside what you’re stuck on and work on a different program.
But keep writing code. Write a lot of code.
The more code you write, the easier it will become to write more code. Source: Dave Voohis
What is the best language to learn how to code? I’m learning Python. Is it the best to start?
It depends on what you want to do.
If you want to just mess around with programming as a hobby, it’s fine. In fact, it’s pretty good. Since it’s “batteries included”, you can often get a lot done in just a few lines of code. Learn Python 3, not 2.
If you want to be a professional software engineer, Python's a poor place to start. Its syntax isn't terrible, but it's weird. Its take on OO is different from almost all other OO languages. It'll teach you bad habits that you'll have to unlearn when switching to another language.
If you want to eventually be a professional software engineer, learn another OO language first. I prefer C#, but Java’s a great choice too. If you don’t care about OO, C is a great choice. Nearly all major languages inherited their syntax from C, so most other languages will look familiar if you start there.
C++ is a stretch these days. Learn another OO language first. You’ll probably eventually have to learn JavaScript, but don’t start there. It… just don’t.
So, ya. If you just want to do some hobby coding and write some short scripts and utilities, Python’s fine. If you want to eventually be a pro SE, look elsewhere. Source: Chris Nash
Do you need to master all the small details when learning programming? I want to master C++ do I need to read the whole book sequentially giving care to each and every small detail?
You master a language by using it, not just reading about it and memorizing trivia. You’ll pick up and internalize plenty of trivia anyway while getting real world work done.
Reading books and blogs and whatnot helps, but those are more meaningful if you have real world problems to apply the material to. Otherwise, much of it is likely to go into your eyeballs and ooze right back out of your ears, metaphorically speaking.
I usually don’t dig into all the low level details when reading a programming book, unless it’s specifically needed for a problem I am trying to solve. Or, it caught my curiosity, in which case, satisfying my curiosity is the problem I am trying to solve.
Once you learn the basics, use books and other resources to accelerate you on your journey. What to read, and when, will largely be driven by what you decide to work on.
Bjarne Stroustrup, the creator of C++, has this to say:
And no, I’m not a walking C++ dictionary. I do not keep every technical detail in my head at all times. If I did that, I would be a much poorer programmer. I do keep the main points straight in my head most of the time, and I do know where to find the details when I need them.
Why does software engineering pay so much more than other engineering jobs/careers?
Scale. There is no field other than software where a company can have 2 billion customers, and do it with only a few tens of thousands of employees. The only others that come close are petroleum and banking – both of which are also very highly paid. By David Seidman
What’s the best code you’ve seen a professional programmer write? How does it compare to the average programmer?
Professional programmer’s code:
//Here we address a strange issue that was seen on
//production a few times, but is not reproduced
//locally. A user can be mysteriously logged out after
//clicking the Back button. This seems related to recent
//changes to the redirect scheme upon order confirmation.
login(currentUser());
Average programmer’s code:
//Hotfix – don’t ask
login(currentUser());
Professional programmer’s commit message:
Fix memory leak in connection pool
We’ve seen connections leaking from the pool
if any query had already been executed through
it and then exception is thrown.
The root cause was found in ConnectionPool.addExceptionHook()
After the first few years of programming, when the urge to put some cool-looking construct only you can understand into every block of code wears off, you'll likely come to the conclusion that these examples are actually the code you want to encounter when opening a new project.
If we look at apps written by good versus average programmers (not talking about total beginners), the code itself is not that much different; but if small conveniences everywhere allow you to avoid frustration while reading it, it is likely written by a professional.
The only valid measurement of code quality is WTFs/minute.
Why is Fortran chosen over C/C++ for simulation software in computational physics?
I worked as an academic in physics for about 10 years, and used Fortran for much of that time. I had to learn Fortran for the job, as I was already fluent in C/C++.
The prevalence of Fortran in computational physics comes down to three factors:
Performance. Yes, Fortran code is typically faster than C/C++ code. One of the main reasons for this is that Fortran compilers are heavily optimised towards making fast code, and the Fortran language spec is designed such that compilers will know what to optimise. It’s possible to make your C program as fast as a Fortran one, but it’s considerably more work to do so.
Convenience. Imagine you want to add a scalar to an array of values – this is the sort of thing we do all the time in physics. In C you’d either need to rely on an external library, or you’d need to write a function for this (leading to verbose code). In Fortran you just add them together, and the scalar is broadcasted across all elements of the array. You can do the same with multiplication and addition of two arrays as well. Fortran was originally the Formula-translator, and therefore makes math operations easy.
Legacy. When you start a PhD, you’re often given some ex-post-doc’s (or professor’s) code as a starting point. Often times this code will be in Fortran (either because of the age of the person, or because they were given Fortran code). Unfortunately sometimes this code is F77, which means that we still have people in their 20s learning F77 (which I think is just wrong these days, as it gives Fortran as a whole a bad name). Source: Erlend Davidson
If a pointer is just a variable that contains memory, then why does it need to know the type of its values?
My friend, if you like C, you are gonna looooove B. B was C’s predecessor language. It’s a lot like C, but for C, Thompson and Ritchie added in data types. Basically, C is for lazy programmers. The only data type in B was determined by the size of a word on the host system. B was for “real-men programmers” who ate Hollerith cards for extra fiber, chewed iron into memory cores when they ran out of RAM, and dreamed in hexadecimal. Variables are evaluated contextually in B, and it doesn’t matter what the hell they contain; they are treated as though they hold integers in integer operations, and as though they hold memory addresses in pointer operations. Basically, B has all of the terseness of an assembly language, without all of the useful tooling that comes along with assembly.
As others indicate, pointers do not hold memory; they hold memory addresses. They are typed because before you go to that memory address, you probably want to know what's there. Among other issues, how big is "there"? Should you read eight bits? Sixteen? Thirty-two? More? Inquiring minds want to know! Of course, it would also be nice to know whether the element at that address is an individual element or one element in an array, but C is for slightly less "real-men programmers" than B. Java does fully differentiate between scalars and arrays, and therefore is clearly for the weak minded. /jk Source: Joshua Gross
Hidden Features of C#
What are the most hidden features or tricks of C# that even C# fans, addicts, experts barely know?
This isn’t C# per se, but I haven’t seen anyone who really uses System.IO.Path.Combine() to the extent that they should. In fact, the whole Path class is really useful, but no one uses it!
Lambdas and type inference are underrated. Lambdas can have multiple statements, and they double as a compatible delegate object automatically (just make sure the signatures match).
When normalizing strings, it is highly recommended that you use ToUpperInvariant instead of ToLowerInvariant because Microsoft has optimized the code for performing uppercase comparisons.
I remember one time my coworker always changed strings to uppercase before comparing. I’ve always wondered why he does that because I feel it’s more “natural” to convert to lowercase first. After reading the book now I know why.
My favorite trick is using the null coalesce operator and parentheses to automagically instantiate collections for me.
private IList<Foo> _foo;
public IList<Foo> ListOfFoo
{ get { return _foo ?? (_foo = new List<Foo>()); } }
Here are some interesting hidden C# features, in the form of undocumented C# keywords:
__makeref
__reftype
__refvalue
__arglist
These are undocumented C# keywords (even Visual Studio recognizes them!) that were added for more efficient boxing/unboxing prior to generics. They work in coordination with the System.TypedReference struct.
There’s also __arglist, which is used for variable length parameter lists.
One thing folks don’t know much about is System.WeakReference — a very useful class that keeps track of an object but still allows the garbage collector to collect it.
The most useful “hidden” feature would be the yield return keyword. It’s not really hidden, but a lot of folks don’t know about it. LINQ is built atop this; it allows for delay-executed queries by generating a state machine under the hood. Raymond Chen recently posted about the internal, gritty details.
Using @ for variable names that are keywords.
var @object = new object();
var @string = "";
var @if = IpsoFacto();
If you want to exit your program without calling any finally blocks or finalizers, use Environment.FailFast().
IANA has registered the official MIME Type for JSON as application/json.
When asked about why not text/json, Crockford seems to have said JSON is not really JavaScript nor text and also IANA was more likely to hand out application/* than text/*.
JSON (JavaScript Object Notation) and JSONP ("JSON with padding") formats seem to be very similar, and therefore it might be very confusing which MIME type they should be using. Even though the formats are similar, there are some subtle differences between them.
So whenever in any doubts, I have a very simple approach (which works perfectly fine in most cases), namely, go and check corresponding RFC document.
JSON: RFC 4627 (The application/json Media Type for JavaScript Object Notation (JSON)) is the specification of the JSON format. It says in section 6 that the MIME media type for JSON text is
application/json.
JSONP: JSONP ("JSON with padding") is handled in a different way than JSON in a browser. JSONP is treated as a regular JavaScript script, and therefore it should use application/javascript, the current official MIME type for JavaScript. In many cases, however, the text/javascript MIME type will work fine too.
Note that text/javascript has been marked as obsolete by the RFC 4329 (Scripting Media Types) document, and it is recommended to use the application/javascript type instead. However, due to legacy reasons, text/javascript is still widely used and has cross-browser support (which is not always the case with the application/javascript MIME type, especially with older browsers).
What are some mistakes to avoid while learning programming?
Overuse of the GOTO statement. Most schools teach that this is a no-no.
Not commenting your code with proper documentation: what exactly does the code do?
Endless loops: a structured loop that has NO EXIT point.
Overwriting memory, destroying data and/or code, especially with dynamic allocation, stacks, and queues.
Not following discipline – Requirements, Design, Code, Test, Implementation
Moreover, complex code should have a BLUEPRINT, i.e. a design. Otherwise it is like saying let's build a house without a floor plan. Code/programs that have a requirements and design specification BEFORE the code is written tend to have a LOWER error rate, meaning less time spent debugging and fixing errors. Source: Quora
The thing that always struck me is that the best programmers I would meet or read all had a couple of things in common.
They didn’t use IDEs, preferring Emacs or Vim.
They all learned or used functional programming (Lisp, Haskell, OCaml).
They all wrote or endorsed some kind of testing, even if it's just minimal TDD.
They avoided fads and dependencies like the plague.
It is a basic truth that learning Lisp, or any functional programming, will fundamentally change the way you program and think about programming. Source: Quora
Which is better among pair programming and test-driven development?
The two work well together. Both are effective at what they do:
Pairing is a continuous code review, with a human-powered "auto suggest". If you like GitHub Copilot, pairing does that with a real brain behind it.
TDD forces you to think about how your code will be used early on in the process. That gives you the chance to code things so they are clear and easy to use
Both of these are ‘shift-left’ activities. In the days of old, code review and testing happened after the code was written. Design happened up front, but separate to coding, so you never got to see if the design was actually codeable properly. By shifting these activities to before the code gets written, we get a much faster feedback loop. That enables us to make corrections and improvements as we go.
Neither is better than the other. They target different parts of the coding challenge. By Alan Mellor
Do software engineers ever feel the need to have more than 2 monitors when they are coding?
Yes, I’ve found that three can be very helpful, especially these days.
Monitor 1: IDE full screen
Monitor 2: Google, JIRA ticket, documentation. Manual Test tools
Monitor 3: Zoom/Teams/Slack/Outlook for general comms
That third monitor becomes almost essential if you are remote pairing and want to see your collaborator in real time.
My current work is teaching groups in our academy. That also benefits from three monitors: presenter view, participant view, and Zoom for chat and raised hands in the group.
I can get away with two monitors. I can even do it with a £3 HDMI fake monitor USB plug. Neither is quite as effective. Source: Alan Mellor
How do you use classes interchangeably when the properties are different (C#, OOP, design patterns, development)?
You make the properties not different. And the key way to do that is by removing the properties completely.
Instead, you tell your objects to do some behaviour.
Say we have three classes full of different data that all needs adding to some report. Make an interface like this:
interface IReportSource {
void includeIn( Report r );
}
so here, all your classes with different data will implement this interface. We can call the method ‘includeIn’ on each of them. We pass in a concrete class Report to that method. This will be the report that is being generated.
Then your first class which used to look like
class ALoadOfData {
    public string Name { get; set; }
    public int Quantity { get; set; }
}
(forgive the rusty/pseudo C# syntax please)
can be translated into:
class ARealObject : IReportSource {
    private string name;
    private int quantity;

    public void includeIn( Report r ) {
        r.addBasicItem( name, quantity );
    }
}
You can see how the properties are no longer exposed. They remain encapsulated in the object, available for use inside our includeIn() method. That is now polymorphic, and you would write a custom includeIn() for each kind of class implementing IReportSource. It can then call a suitable method on the Report class, with a suitable number of properties (now hidden; so just fields). By Alan Mellor
2- Bloom filter: Bit array of m bits, initially all set to 0.
To add an item you run it through k hash functions that will give you k indices in the array which you then set to 1.
To check if an item is in the set, compute the k indices and check if they are all set to 1.
Of course, this gives some probability of false-positives (according to wikipedia it’s about 0.61^(m/n) where n is the number of inserted items). False-negatives are not possible.
Removing an item is impossible, but you can implement a counting Bloom filter, represented by an array of ints with increment/decrement.
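A toy Python sketch of the idea (the salted SHA-256 "hash functions" and the sizes are arbitrary choices for illustration):
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _indices(self, item):
        # derive k indices from k salted hashes of the item
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def might_contain(self, item):
        # False means definitely absent; True may be a false positive
        return all(self.bits[idx] for idx in self._indices(item))

bf = BloomFilter()
bf.add("hello")
print(bf.might_contain("hello"), bf.might_contain("world"))  # True False (almost certainly)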
3- Rope: It's a string that allows for cheap prepends, substrings, middle insertions and appends. I've really only had use for it once, but no other structure would have sufficed. Regular string and array prepends were just far too expensive for what we needed to do, and reversing everything was out of the question.
4- Skip list: A skip list is a probabilistic data structure, based on multiple parallel, sorted linked lists, with efficiency comparable to a binary search tree (order log n average time for most operations). (Wikipedia)
They can be used as an alternative to balanced trees (using probabilistic balancing rather than strict enforcement of balancing). They are easy to implement and faster than, say, a red-black tree. I think they should be in every good programmer's toolchest.
If you want to get an in-depth introduction to skip-lists here is a link to a video of MIT’s Introduction to Algorithms lecture on them.
Also, here is a Java applet demonstrating Skip Lists visually.
5– Spatial Indices, in particular R-trees and KD-trees, store spatial data efficiently. They are good for geographical map coordinate data and VLSI place and route algorithms, and sometimes for nearest-neighbor search.
Bit Arrays store individual bits compactly and allow fast bit operations.
6- Zippers: derivatives of data structures that modify the structure to have a natural notion of ‘cursor’, a current location. These are really useful as they guarantee indices cannot be out of bounds; they are used, for example, in the xmonad window manager to track which window has focus.
9- Heap-ordered search trees: you store a bunch of (key, prio) pairs in a tree, such that it’s a search tree with respect to the keys, and heap-ordered with respect to the priorities. One can show that such a tree has a unique shape (and it’s not always fully packed up-and-to-the-left). With random priorities, it gives you expected O(log n) search time, IIRC.
10- A niche one is adjacency lists for undirected planar graphs with O(1) neighbour queries. This is not so much a data structure as a particular way to organize an existing data structure. Here’s how you do it: every planar graph has a node with degree at most 6. Pick such a node, put its neighbors in its neighbor list, remove it from the graph, and recurse until the graph is empty. When given a pair (u, v), look for u in v’s neighbor list and for v in u’s neighbor list. Both have size at most 6, so this is O(1).
By the above construction, if u and v are neighbors, you won’t necessarily have both u in v’s list and v in u’s list. If you need this, just add each node’s missing neighbors to that node’s neighbor list, but store how much of the neighbor list you need to look through for fast lookup.
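A rough C++ sketch of the construction described above, assuming the input graph is given as a vector of adjacency sets (the helper names are made up for this example, and node selection is done in a simple O(V^2) way for clarity):

#include <cstddef>
#include <set>
#include <vector>

// For each node, store only the neighbors it still had when it was removed.
// A planar graph always contains a node of degree at most 6 (in fact at most 5),
// so each stored list has at most 6 entries and adjacency queries are O(1).
std::vector<std::vector<int>> buildPlanarAdjacency(std::vector<std::set<int>> g) {
    const std::size_t n = g.size();
    std::vector<std::vector<int>> lists(n);
    std::vector<bool> removed(n, false);

    for (std::size_t step = 0; step < n; ++step) {
        // Pick any remaining node of minimum degree.
        int best = -1;
        for (std::size_t v = 0; v < n; ++v)
            if (!removed[v] && (best < 0 || g[v].size() < g[best].size()))
                best = static_cast<int>(v);

        lists[best].assign(g[best].begin(), g[best].end()); // record its neighbor list
        removed[best] = true;
        for (int w : lists[best]) g[w].erase(best);          // remove it from the graph
        g[best].clear();
    }
    return lists;
}

// O(1) neighbor query: each stored list has at most 6 entries.
bool adjacent(const std::vector<std::vector<int>>& lists, int u, int v) {
    for (int x : lists[u]) if (x == v) return true;
    for (int x : lists[v]) if (x == u) return true;
    return false;
}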
11- Lock-free alternatives to standard data structures, i.e. lock-free queues, stacks and lists, are much overlooked. They are increasingly relevant as concurrency becomes a higher priority, and they are a much more admirable goal than using mutexes or locks to handle concurrent reads and writes.
Mike Acton’s (often provocative) blog has some excellent articles on lock-free design and approaches
12- I think Disjoint Set is pretty nifty for cases when you need to divide a bunch of items into distinct sets and query membership. A good implementation of the Union and Find operations results in amortized costs that are effectively constant (the inverse of Ackermann’s function, if I recall my data structures class correctly).
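A minimal union-find sketch in C++ with the two usual optimizations (path compression and union by size):

#include <numeric>
#include <utility>
#include <vector>

struct DisjointSet {
    std::vector<int> parent, size;

    explicit DisjointSet(int n) : parent(n), size(n, 1) {
        std::iota(parent.begin(), parent.end(), 0);   // every element starts in its own set
    }

    int find(int x) {
        while (parent[x] != x) {
            parent[x] = parent[parent[x]];            // path compression (halving)
            x = parent[x];
        }
        return x;
    }

    bool unite(int a, int b) {                        // union by size
        a = find(a); b = find(b);
        if (a == b) return false;
        if (size[a] < size[b]) std::swap(a, b);
        parent[b] = a;
        size[a] += size[b];
        return true;
    }

    bool sameSet(int a, int b) { return find(a) == find(b); }
};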
13- Fibonacci heaps: They’re used in some of the fastest known algorithms (asymptotically) for a lot of graph-related problems, such as the Shortest Path problem. Dijkstra’s algorithm runs in O(E log V) time with standard binary heaps; using Fibonacci heaps improves that to O(E + V log V), which is a huge speedup for dense graphs. Unfortunately, though, they have a high constant factor, often making them impractical in practice.
14- Anyone with experience in 3D rendering should be familiar with BSP trees. Generally, it’s a method of structuring a 3D scene so that it can be rendered efficiently given the camera coordinates and bearing.
Binary space partitioning (BSP) is a method for recursively subdividing a space into convex sets by hyperplanes. This subdivision gives rise to a representation of the scene by means of a tree data structure known as a BSP tree.
In other words, it is a method of breaking up intricately shaped polygons into convex sets, or smaller polygons consisting entirely of non-reflex angles (angles smaller than 180°). For a more general description of space partitioning, see space partitioning.
Originally, this approach was proposed in 3D computer graphics to increase the rendering efficiency. Some other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D computer games, and other computer applications that involve handling of complex spatial scenes.
16- Have a look at Finger Trees, especially if you’re a fan of the previously mentioned purely functional data structures. They’re a functional representation of persistent sequences supporting access to the ends in amortized constant time, and concatenation and splitting in time logarithmic in the size of the smaller piece.
Our functional 2-3 finger trees are an instance of a general design technique introduced by Okasaki (1998), called implicit recursive slowdown. We have already noted that these trees are an extension of his implicit deque structure, replacing pairs with 2-3 nodes to provide the flexibility required for efficient concatenation and splitting.
A Finger Tree can be parameterized with a monoid, and using different monoids will result in different behaviors for the tree. This lets Finger Trees simulate other data structures.
18- I’m surprised no one has mentioned Merkle trees (i.e. hash trees).
Used in many cases (P2P programs, digital signatures) where you want to verify the hash of a whole file when you only have part of the file available to you.
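A tiny C++ illustration of the idea, using std::hash as a stand-in for a real cryptographic hash (an assumption made purely for brevity; an actual Merkle tree would use something like SHA-256):

#include <cstddef>
#include <functional>
#include <string>
#include <vector>

// Leaves are the hashes of the individual file chunks.
std::vector<std::size_t> hashChunks(const std::vector<std::string>& chunks) {
    std::vector<std::size_t> leaves;
    for (const auto& c : chunks) leaves.push_back(std::hash<std::string>{}(c));
    return leaves;
}

// Build the tree level by level: each parent is the hash of its two children.
// The single hash left at the top is the root; verifying one chunk only needs
// the sibling hashes along its path to the root, not the whole file.
std::size_t merkleRoot(std::vector<std::size_t> level) {
    if (level.empty()) return 0;
    while (level.size() > 1) {
        std::vector<std::size_t> next;
        for (std::size_t i = 0; i < level.size(); i += 2) {
            std::size_t left = level[i];
            std::size_t right = (i + 1 < level.size()) ? level[i + 1] : left; // duplicate an odd last node
            next.push_back(std::hash<std::string>{}(std::to_string(left) + ":" + std::to_string(right)));
        }
        level.swap(next);
    }
    return level[0];
}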
19- Van Emde Boas trees (suggested by zvrba).
I think it’d be useful to know why they’re cool. In general, the question “why” is the most important to ask 😉
My answer is that they give you O(log log n) dictionaries with {1..n} keys, independent of how many of the keys are in use. Just like repeated halving gives you O(log n), repeatedly taking square roots gives you O(log log n), which is what happens in the vEB tree.
20- An interesting variant of the hash table is called Cuckoo Hashing. It uses multiple hash functions instead of just 1 in order to deal with hash collisions. Collisions are resolved by removing the old object from the location specified by the primary hash, and moving it to a location specified by an alternate hash function. Cuckoo Hashing allows for more efficient use of memory space because you can increase your load factor up to 91% with only 3 hash functions and still have good access time.
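A simplified C++ sketch of the scheme, using just two hash functions and one table instead of the three mentioned above (names and constants here are illustrative only; a production cuckoo table would also rehash with fresh hash functions when insertion fails):

#include <cstddef>
#include <functional>
#include <optional>
#include <utility>
#include <vector>

class CuckooSet {
    std::vector<std::optional<int>> slots_;

    std::size_t h1(int key) const { return std::hash<int>{}(key) % slots_.size(); }
    std::size_t h2(int key) const { return (std::hash<int>{}(key) * 2654435761u + 1) % slots_.size(); }

public:
    explicit CuckooSet(std::size_t capacity) : slots_(capacity) {}

    // Lookup is O(1): a key can only ever live in one of its two slots.
    bool contains(int key) const {
        return (slots_[h1(key)] && *slots_[h1(key)] == key) ||
               (slots_[h2(key)] && *slots_[h2(key)] == key);
    }

    // Insert by evicting whatever occupies the target slot and re-placing the
    // evicted key at its alternate location, up to a bounded number of kicks.
    bool insert(int key) {
        if (contains(key)) return true;
        int current = key;
        std::size_t idx = h1(current);
        for (int kicks = 0; kicks < 32; ++kicks) {
            if (!slots_[idx]) { slots_[idx] = current; return true; }
            std::swap(current, *slots_[idx]);                         // evict the old occupant
            idx = (idx == h1(current)) ? h2(current) : h1(current);   // try its other slot
        }
        return false;  // in a real table this would trigger a rehash/resize
    }
};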
Is there any way to make interpreted languages such as Python just as fast as C++? Why or why not?
Variable names in languages like Python are not bound to storage locations until run time. That means you have to look up each name to find out what storage it is bound to and what its type is before you can apply an operation like “+” to it. In C++, names are bound to storage at compile time, so no lookup is needed, and the type is fixed at compile time so the compiler can generate machine code with no overhead for interpretation. Late-bound languages will never be as fast as languages bound at compile time.
You could make a language that looks kinda like Python that is compile-time bound and statically typed. You could incrementally compile such a language. But you can also build an environment that incrementally compiles C++ so it would feel a lot like using Python. Try godbolt or tutorialspoint if you want to see this actually working for small programs.
I want to be a computer programmer when I grow up but I don’t have a high IQ. What should I do?
Originally Answered: I want to become a computer programmer when I grow up but I don’t have a high IQ. What do I do?
Have I got good news for you! No one has ever asked me my IQ, nor have I ever asked anyone for their IQ. This was true when I was a software engineer, and is true now that I’m a computer scientist.
Try to learn to program. If you can learn in an appropriate environment (a class with a good instructor), go from there. If you fail the first time, adjust your learning approach and try again. If you still can’t, find another future; you probably wouldn’t like computer programming, anyway. If you learn later, that’s fine.
Which are the hardest C++ concepts beginners struggle to understand? How would you have explained them?
Beginners to C++ will consistently struggle with getting a C++ program off the ground. Even “Hello World” can be a challenge. Making a GUI in C++ from scratch? Almost impossible in the beginning.
These 4 areas cannot be learned by any beginner to C++ in 1 day or even 1 month in most cases. These areas challenge nearly all beginners and I have seen cases where it can take a few months to teach.
These are the most fundamental things you need to be able to do to build and produce a program in C++.
Basic Challenge #1: Creating a Program File
Compiling and linking, even in an IDE.
Project settings in an IDE for C++ projects.
Make files, scripts, environment variables affecting compilation.
Basic Challenge #2: Using Other People’s C++ Code
Going outside the STL and using libraries.
Proper library paths in source, file path during compile.
You cannot explain any of them in a way that most people will pick up right away. You can describe these things by way of analogy, and you can even have learners mirror you at the same time you demonstrate them. I’ve done similar things with trainees in a work setting. In the end, it usually requires time on the order of months and years to pick up these things.
What is a list of programming languages ordered from easiest to hardest to learn?
As a professional compiler writer and a student of computer languages and computer architecture, I think this question needs a deeper analysis.
I would propose the following taxonomy:
1. Assembly code,
2. Implementation languages,
3. Low Level languages and
4. High Level Languages.
Assembly code is where there is a one-for-one translation between source and machine code.
Macro processors were invented to improve productivity, but to debug, a one-for-one listing is needed. The next question is: what is the hardest assembly code? I would vote for the x86-32. It is a very byzantine architecture with a number of mistakes and missteps. Fortunately the x86-64 cleans up many of these errors.
Implementation languages are languages that are architecture specific but allow a more statement like expression.
There is no “semantic gap” between assembly code and the machine. Bliss, PL360, and the first versions of C were in this category. They required the same understanding of the machine as assembly, without the pain of assembly. These are hard languages; their gap from assembly is only one of syntax.
Next are the Low Level Languages.
Modern C firmly fits here. These are languages whose design was molded around the limitations of computer architecture. FORTRAN, C, Pascal, and Basic are archetypes of these languages. These are easier to learn and use than assembly and implementation languages. They all have a “run-time library” that maintains an execution environment.
As a note, LISP has some syntax, CAR and CDR, which are left over from the IBM 704 it was first implemented on.
Last are the “High Level Languages”.
These are languages that require an extensive runtime environment and, except for Algol, a “garbage collector” for efficient memory support. The languages are: Algol, SNOBOL4, LISP (and its variants), Java, Smalltalk, Python, Ruby, and Prolog.
Which of these is hardest? I would vote for Prolog, with LISP second. Why? The logical process of “resolution” has taken me some time to learn, and mastery is a long way away. Is it harder than assembly code? Yes and no. I would never attempt in assembly a problem I use Prolog for; the order of effort is too big. I find I spend hours writing 20 lines of Prolog which replace hundreds of lines of SNOBOL4. LISP can be hard unless you have intelligent editors and other tools. In one sense LISP is an “assembly language for an AI machine” and Prolog is an “assembly language for a logic machine.” Both Prolog and LISP are very powerful languages. I find it takes deep mental effort to write code in both. But the code does wonderful things!
What and where are the stack and the heap (physically, in a real computer’s memory)?
To what extent are they controlled by the OS or language run-time?
What is their scope?
What determines the size of each of them?
What makes one faster?
The stack is the memory set aside as scratch space for a thread of execution. When a function is called, a block is reserved on the top of the stack for local variables and some bookkeeping data. When that function returns, the block becomes unused and can be used the next time a function is called. The stack is always reserved in a LIFO (last in first out) order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.
The heap is memory set aside for dynamic allocation. Unlike the stack, there’s no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time; there are many custom heap allocators available to tune heap performance for different usage patterns.
Each thread gets a stack, while there’s typically only one heap for the application (although it isn’t uncommon to have multiple heaps for different types of allocation).
To answer your questions directly:
To what extent are they controlled by the OS or language runtime?
The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.
What is their scope?
The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.
What determines the size of each of them?
The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).
What makes one faster?
The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or deallocation. Also, each byte in the stack tends to be reused very frequently which means it tends to be mapped to the processor’s cache, making it very fast. Another performance hit for the heap is that the heap, being mostly a global resource, typically has to be multi-threading safe, i.e. each allocation and deallocation needs to be – typically – synchronized with “all” other heap accesses in the program.
Stack:
Variables created on the stack will go out of scope and are automatically deallocated.
Much faster to allocate in comparison to variables on the heap.
Implemented with an actual stack data structure.
Stores local data, return addresses, used for parameter passing.
Can have a stack overflow when too much of the stack is used (mostly from infinite or too deep recursion, very large allocations).
Data created on the stack can be used without pointers.
You would use the stack if you know exactly how much data you need to allocate before compile time and it is not too big.
Usually has a maximum size already determined when your program starts.
Heap:
Stored in computer RAM just like the stack.
In C++, variables on the heap must be destroyed manually and never fall out of scope. The data is freed with delete, delete[], or free.
Slower to allocate in comparison to variables on the stack.
Used on demand to allocate a block of data for use by the program.
Can have fragmentation when there are a lot of allocations and deallocations.
In C++ or C, data created on the heap will be pointed to by pointers and allocated with new or malloc respectively.
Can have allocation failures if too big of a buffer is requested to be allocated.
You would use the heap if you don’t know exactly how much data you will need at run time or if you need to allocate a lot of data.
Responsible for memory leaks.
Example:
void foo()
{
    char *pBuffer; // <-- nothing allocated yet (excluding the pointer itself, which is allocated here on the stack).
    bool b = true; // Allocated on the stack.
    if (b)
    {
        // Create 500 bytes on the stack
        char buffer[500];

        // Create 500 bytes on the heap
        pBuffer = new char[500];
    } // <-- buffer is deallocated here, pBuffer is not
} // <--- oops, there's a memory leak; I should have called delete[] pBuffer;
The most important point is that heap and stack are generic terms for ways in which memory can be allocated. They can be implemented in many different ways, and the terms apply to the basic concepts.
In a stack of items, items sit one on top of the other in the order they were placed there, and you can only remove the top one (without toppling the whole thing over).
The simplicity of a stack is that you do not need to maintain a table containing a record of each section of allocated memory; the only state information you need is a single pointer to the end of the stack. To allocate and de-allocate, you just increment and decrement that single pointer. Note: a stack can sometimes be implemented to start at the top of a section of memory and extend downwards rather than growing upwards.
In a heap, there is no particular order to the way items are placed. You can reach in and remove items in any order because there is no clear ‘top’ item.
Heap allocation requires maintaining a full record of what memory is allocated and what isn’t, as well as some overhead maintenance to reduce fragmentation, find contiguous memory segments big enough to fit the requested size, and so on. Memory can be deallocated at any time leaving free space. Sometimes a memory allocator will perform maintenance tasks such as defragmenting memory by moving allocated memory around, or garbage collecting – identifying at runtime when memory is no longer in scope and deallocating it.
To what extent are they controlled by the OS or language runtime?
As mentioned, heap and stack are general terms, and can be implemented in many ways. Computer programs typically have a stack called a call stack which stores information relevant to the current function such as a pointer to whichever function it was called from, and any local variables. Because functions call other functions and then return, the stack grows and shrinks to hold information from the functions further down the call stack. A program doesn’t really have runtime control over it; it’s determined by the programming language, OS and even the system architecture.
A heap is a general term used for any memory that is allocated dynamically and randomly; i.e. out of order. The memory is typically allocated by the OS, with the application calling API functions to do this allocation. There is a fair bit of overhead required in managing dynamically allocated memory, which is usually handled by the runtime code of the programming language or environment used.
What is their scope?
The call stack is such a low-level concept that it doesn’t relate to ‘scope’ in the sense of programming. If you disassemble some code you’ll see relative pointer-style references to portions of the stack, but as far as a higher-level language is concerned, the language imposes its own rules of scope. One important aspect of a stack, however, is that once a function returns, anything local to that function is immediately freed from the stack. That works the way you’d expect it to work given how your programming languages work. For the heap, scope is also difficult to define. The scope is whatever is exposed by the OS, but your programming language probably adds its own rules about what a “scope” is in your application. The processor architecture and the OS use virtual addressing, which the processor translates to physical addresses, and there are page faults, etc. They keep track of what pages belong to which applications. You never really need to worry about this, though, because you just use whatever method your programming language uses to allocate and free memory, and check for errors (if the allocation/freeing fails for any reason).
What determines the size of each of them?
Again, it depends on the language, compiler, operating system and architecture. A stack is usually pre-allocated, because by definition it must be contiguous memory. The language compiler or the OS determine its size. You don’t store huge chunks of data on the stack, so it’ll be big enough that it should never be fully used, except in cases of unwanted endless recursion (hence, “stack overflow”) or other unusual programming decisions.
A heap is a general term for anything that can be dynamically allocated. Depending on which way you look at it, it is constantly changing size. In modern processors and operating systems the exact way it works is very abstracted anyway, so you don’t normally need to worry much about how it works deep down, except that (in languages where it lets you) you mustn’t use memory that you haven’t allocated yet or memory that you have freed.
What makes one faster?
The stack is faster because all free memory is always contiguous. No list needs to be maintained of all the segments of free memory, just a single pointer to the current top of the stack. Compilers usually store this pointer in a special, fast register for this purpose. What’s more, subsequent operations on a stack are usually concentrated within very nearby areas of memory, which at a very low level is good for optimization by the processor on-die caches.
Both the stack and the heap are memory areas allocated from the underlying operating system (often virtual memory that is mapped to physical memory on demand).
In a multi-threaded environment each thread will have its own completely independent stack but they will share the heap. Concurrent access has to be controlled on the heap and is not possible on the stack.
The heap
The heap contains a linked list of used and free blocks. New allocations on the heap (by new or malloc) are satisfied by creating a suitable block from one of the free blocks. This requires updating the list of blocks on the heap. This meta information about the blocks on the heap is also stored on the heap often in a small area just in front of every block.
As the heap grows new blocks are often allocated from lower addresses towards higher addresses. Thus you can think of the heap as a heap of memory blocks that grows in size as memory is allocated. If the heap is too small for an allocation the size can often be increased by acquiring more memory from the underlying operating system.
Allocating and deallocating many small blocks may leave the heap in a state where there are a lot of small free blocks interspersed between the used blocks. A request to allocate a large block may fail because none of the free blocks are large enough to satisfy the allocation request even though the combined size of the free blocks may be large enough. This is called heap fragmentation.
When a used block that is adjacent to a free block is deallocated the new free block may be merged with the adjacent free block to create a larger free block effectively reducing the fragmentation of the heap.
The stack
The stack often works in close tandem with a special register on the CPU named the stack pointer. Initially the stack pointer points to the top of the stack (the highest address on the stack).
The CPU has special instructions for pushing values onto the stack and popping them off the stack. Each push stores the value at the current location of the stack pointer and decreases the stack pointer. A pop retrieves the value pointed to by the stack pointer and then increases the stack pointer (don’t be confused by the fact that adding a value to the stack decreases the stack pointer and removing a value increases it. Remember that the stack grows to the bottom). The values stored and retrieved are the values of the CPU registers.
If a function has parameters, these are pushed onto the stack before the call to the function. The code in the function is then able to navigate up the stack from the current stack pointer to locate these values.
When a function is called the CPU uses special instructions that push the current instruction pointer onto the stack, i.e. the address of the code executing on the stack. The CPU then jumps to the function by setting the instruction pointer to the address of the function called. Later, when the function returns, the old instruction pointer is popped off the stack and execution resumes at the code just after the call to the function.
When a function is entered, the stack pointer is decreased to allocate more space on the stack for local (automatic) variables. If the function has one local 32 bit variable four bytes are set aside on the stack. When the function returns, the stack pointer is moved back to free the allocated area.
Nesting function calls work like a charm. Each new call will allocate function parameters, the return address and space for local variables and these activation records can be stacked for nested calls and will unwind in the correct way when the functions return.
As the stack is a limited block of memory, you can cause a stack overflow by calling too many nested functions and/or allocating too much space for local variables. Often the memory area used for the stack is set up in such a way that writing below the bottom (the lowest address) of the stack will trigger a trap or exception in the CPU. This exceptional condition can then be caught by the runtime and converted into some kind of stack overflow exception.
Can a function be allocated on the heap instead of a stack?
No, activation records for functions (i.e. local or automatic variables) are allocated on the stack that is used not only to store these variables, but also to keep track of nested function calls.
How the heap is managed is really up to the runtime environment. C uses malloc and C++ uses new, but many other languages have garbage collection.
However, the stack is a more low-level feature closely tied to the processor architecture. Growing the heap when there is not enough space isn’t too hard since it can be implemented in the library call that handles the heap. By contrast, growing the stack is often impossible, as the stack overflow is only discovered when it is too late; shutting down the thread of execution is then the only viable option.
In the following C# code
public void Method1()
{
    int i = 4;
    int y = 2;
    class1 cls1 = new class1();
}
Here’s how the memory is managed
Local Variables that only need to last as long as the function invocation go in the stack. The heap is used for variables whose lifetime we don’t really know up front but we expect them to last a while. In most languages it’s critical that we know at compile time how large a variable is if we want to store it on the stack.
Objects (which vary in size as we update them) go on the heap because we don’t know at creation time how long they are going to last. In many languages the heap is garbage collected to find objects (such as the cls1 object) that no longer have any references.
In Java, most objects go directly into the heap. In languages like C / C++, structs and classes can often remain on the stack when you’re not dealing with pointers.
The Stack
When you call a function, the arguments to that function plus some other overhead are put on the stack. Some info (such as where to go on return) is also stored there. When you declare a variable inside your function, that variable is also allocated on the stack.
Deallocating the stack is pretty simple because you always deallocate in the reverse order in which you allocate. Stack stuff is added as you enter functions, the corresponding data is removed as you exit them. This means that you tend to stay within a small region of the stack unless you call lots of functions that call lots of other functions (or create a recursive solution).
The Heap
The heap is a generic name for where you put the data that you create on the fly. If you don’t know how many spaceships your program is going to create, you are likely to use the new (or malloc or equivalent) operator to create each spaceship. This allocation is going to stick around for a while, so it is likely we will free things in a different order than we created them.
Thus, the heap is far more complex, because there end up being regions of memory that are unused interleaved with chunks that are in use; the memory gets fragmented. Finding free memory of the size you need is a difficult problem. This is why the heap should be avoided (though it is still often used).
Implementation
Implementation of both the stack and heap is usually down to the runtime / OS. Often games and other applications that are performance critical create their own memory solutions that grab a large chunk of memory from the heap and then dish it out internally to avoid relying on the OS for memory.
This is only practical if your memory usage is quite different from the norm – i.e for games where you load a level in one huge operation and can chuck the whole lot away in another huge operation.
Physical location in memory
This is less relevant than you think because of a technology called Virtual Memory, which makes your program think that you have access to a certain address where the physical data is somewhere else (even on the hard disk!). The addresses you get for the stack are in increasing order as your call tree gets deeper. The addresses for the heap are unpredictable (i.e. implementation-specific) and frankly not important.
In Short
A stack is used for static memory allocation and a heap for dynamic memory allocation, both stored in the computer’s RAM.
In Detail
The Stack
The stack is a “LIFO” (last in, first out) data structure, that is managed and optimized by the CPU quite closely. Every time a function declares a new variable, it is “pushed” onto the stack. Then every time a function exits, all of the variables pushed onto the stack by that function, are freed (that is to say, they are deleted). Once a stack variable is freed, that region of memory becomes available for other stack variables.
The advantage of using the stack to store variables, is that memory is managed for you. You don’t have to allocate memory by hand, or free it once you don’t need it any more. What’s more, because the CPU organizes stack memory so efficiently, reading from and writing to stack variables is very fast.
The Heap
The heap is a region of your computer’s memory that is not managed automatically for you, and is not as tightly managed by the CPU. It is a more free-floating region of memory (and is larger). To allocate memory on the heap, you must use malloc() or calloc(), which are built-in C functions. Once you have allocated memory on the heap, you are responsible for using free() to deallocate that memory once you don’t need it any more.
If you fail to do this, your program will have what is known as a memory leak. That is, memory on the heap will still be set aside (and won’t be available to other processes). As we will see in the debugging section, there is a tool called Valgrind that can help you detect memory leaks.
Unlike the stack, the heap does not have size restrictions on variable size (apart from the obvious physical limitations of your computer). Heap memory is slightly slower to be read from and written to, because one has to use pointers to access memory on the heap. We will talk about pointers shortly.
Unlike the stack, variables created on the heap are accessible by any function, anywhere in your program. Heap variables are essentially global in scope.
Variables allocated on the stack are stored directly in memory and access to this memory is very fast; its allocation is dealt with when the program is compiled. When a function or a method calls another function, which in turn calls another function, etc., the execution of all those functions remains suspended until the very last function returns its value. The stack is always reserved in a LIFO order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.
Variables allocated on the heap have their memory allocated at run time and accessing this memory is a bit slower, but the heap size is only limited by the size of virtual memory. Elements of the heap have no dependencies with each other and can always be accessed randomly at any time. You can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time.
You can use the stack if you know exactly how much data you need to allocate before compile time, and it is not too big. You can use the heap if you don’t know exactly how much data you will need at runtime or if you need to allocate a lot of data.
In a multi-threaded situation each thread will have its own completely independent stack, but they will share the heap. The stack is thread specific and the heap is application specific. The stack is important to consider in exception handling and thread executions.
Each thread gets a stack, while there’s typically only one heap for the application (although it isn’t uncommon to have multiple heaps for different types of allocation).
At run time, if the application needs more heap, it can allocate memory from free memory, and if the stack needs more memory, it can allocate it from free memory already reserved for the application.
To what extent are they controlled by the OS or language runtime?
The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.
“You can use the stack if you know exactly how much data you need to allocate before compile time, and it is not too big. You can use the heap if you don’t know exactly how much data you will need at runtime or if you need to allocate a lot of data.”
The size of the stack is set by OS when a thread is created. The size of the heap is set on application startup, but it can grow as space is needed (the allocator requests more memory from the operating system).
What makes one faster?
Stack allocation is much faster since all it really does is move the stack pointer. Using memory pools, you can get comparable performance out of heap allocation, but that comes with a slight added complexity and its own headaches.
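As a small aside on the memory-pool idea mentioned above, here is a minimal bump-allocator (“arena”) sketch in C++: grab one big block up front, then hand out pieces by just advancing an offset, which is close to stack-allocation speed. The class and names here are illustrative, not from any particular library, and alignment values are assumed to be powers of two.

#include <cstddef>
#include <vector>

class Arena {
    std::vector<std::byte> buffer_;
    std::size_t offset_ = 0;

public:
    explicit Arena(std::size_t bytes) : buffer_(bytes) {}

    // Allocation is just a pointer bump (plus alignment); no per-block bookkeeping.
    void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > buffer_.size()) return nullptr;   // pool exhausted
        offset_ = aligned + size;
        return buffer_.data() + aligned;
    }

    // Individual frees are not supported; everything is released in one go.
    void reset() { offset_ = 0; }
};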
Also, stack vs. heap is not only a performance consideration; it also tells you a lot about the expected lifetime of objects.
How about implementing something like SO does with the CAPTCHAs?
If you’re using the site normally, you’ll probably never see one. If you happen to reload the same page too often, post successive comments too quickly, or something else that triggers an alarm, make them prove they’re human. In your case, this would probably be constant reloads of the same page, following every link on a page quickly, or filling in an order form too fast to be human.
If they fail the check x times in a row (say, 2 or 3), give that IP a timeout or other such measure. Then at the end of the timeout, dump them back to the check again.
Since you have unregistered users accessing the site, you do have only IPs to go on. You can issue sessions to each browser and track that way if you wish. And, of course, throw up a human-check if too many sessions are being (re-)created in succession (in case a bot keeps deleting the cookie).
As far as catching too many innocents, you can put up a disclaimer on the human-check page: “This page may also appear if too many anonymous users are viewing our site from the same location. We encourage you to register or login to avoid this.” (Adjust the wording appropriately.)
Besides, what are the odds that X people are loading the same page(s) at the same time from one IP? If they’re high, maybe you need a different trigger mechanism for your bot alarm.
Edit: Another option is if they fail too many times, and you’re confident about the product’s demand, to block them and make them personally CALL you to remove the block.
Having people call does seem like an asinine measure, but it makes sure there’s a human somewhere behind the computer. The key is to have the block only be in place for a condition which should almost never happen unless it’s a bot (e.g. fail the check multiple times in a row). Then it FORCES human interaction – to pick up the phone.
In response to the comment of having them call me, there’s obviously that tradeoff here. Are you worried enough about ensuring your users are human to accept a couple phone calls when they go on sale? If I were so concerned about a product getting to human users, I’d have to make this decision, perhaps sacrificing a (small) bit of my time in the process.
Since it seems like you’re determined to not let bots get the upper hand/slam your site, I believe the phone may be a good option. Since I don’t make a profit off your product, I have no interest in receiving these calls. Were you to share some of that profit, however, I may become interested. As this is your product, you have to decide how much you care and implement accordingly.
The other ways of releasing the block just aren’t as effective: a timeout (but they’d get to slam your site again after, rinse-repeat), a long timeout (if it was really a human trying to buy your product, they’d be SOL and punished for failing the check), email (easily done by bots), fax (same), or snail mail (takes too long).
You could, of course, instead have the timeout period increase per IP for each time they get a timeout. Just make sure you’re not punishing true humans inadvertently.
Is Assembly faster than C++?
The unsatisfying answer: Nearly every C++ compiler can output assembly language,* so assembly language can be exactly the same speed as C++ if you use C++ to develop the assembly code.
The more interesting answer: It’s highly unlikely that an application written entirely in assembly language remains faster than the same application written in C++ over the long run, even in the unlikely case it starts out faster.
Repeat after me: Assembly Language Isn’t Magic™.
For the nitty gritty details, I’ll just point you to some previous answers I’ve written, as well as some related questions, and at the end, an excellent answer from Christopher Clark:
Performance optimization strategies as a last resort
Let’s assume:
the code already is working correctly
the algorithms chosen are already optimal for the circumstances of the problem
the code has been measured, and the offending routines have been isolated
all attempts to optimize will also be measured to ensure they do not make matters worse
OK, you’re defining the problem to where it would seem there is not much room for improvement. That is fairly rare, in my experience. I tried to explain this in a Dr. Dobbs article in November 1993, by starting from a conventionally well-designed non-trivial program with no obvious waste and taking it through a series of optimizations until its wall-clock time was reduced from 48 seconds to 1.1 seconds, and the source code size was reduced by a factor of 4. My diagnostic tool was this. The sequence of changes was this:
The first problem found was use of list clusters (now called “iterators” and “container classes”) accounting for over half the time. Those were replaced with fairly simple code, bringing the time down to 20 seconds.
Now the largest time-taker is more list-building. As a percentage, it was not so big before, but now it is because the bigger problem was removed. I find a way to speed it up, and the time drops to 17 seconds.
Now it is harder to find obvious culprits, but there are a few smaller ones that I can do something about, and the time drops to 13 sec.
Now I seem to have hit a wall. The samples are telling me exactly what it is doing, but I can’t seem to find anything that I can improve. Then I reflect on the basic design of the program, on its transaction-driven structure, and ask if all the list-searching that it is doing is actually mandated by the requirements of the problem.
Then I hit upon a re-design, where the program code is actually generated (via preprocessor macros) from a smaller set of source, and in which the program is not constantly figuring out things that the programmer knows are fairly predictable. In other words, don’t “interpret” the sequence of things to do, “compile” it.
That redesign is done, shrinking the source code by a factor of 4, and the time is reduced to 10 seconds.
Now, because it’s getting so quick, it’s hard to sample, so I give it 10 times as much work to do, but the following times are based on the original workload.
More diagnosis reveals that it is spending time in queue-management. In-lining these reduces the time to 7 seconds.
Now a big time-taker is the diagnostic printing I had been doing. Flush that – 4 seconds.
Now the biggest time-takers are calls to malloc and free. Recycle objects – 2.6 seconds.
Continuing to sample, I still find operations that are not strictly necessary – 1.1 seconds.
Total speedup factor: 43.6
Now no two programs are alike, but in non-toy software I’ve always seen a progression like this. First you get the easy stuff, and then the more difficult, until you get to a point of diminishing returns. Then the insight you gain may well lead to a redesign, starting a new round of speedups, until you again hit diminishing returns. Now this is the point at which it might make sense to wonder whether ++i or i++ or for(;;) or while(1) are faster: the kinds of questions I see so often on Stack Overflow.
P.S. It may be wondered why I didn’t use a profiler. The answer is that almost every one of these “problems” was a function call site, which stack samples pinpoint. Profilers, even today, are just barely coming around to the idea that statements and call instructions are more important to locate, and easier to fix, than whole functions.
I actually built a profiler to do this, but for a real down-and-dirty intimacy with what the code is doing, there’s no substitute for getting your fingers right in it. It is not an issue that the number of samples is small, because none of the problems being found are so tiny that they are easily missed.
ADDED: jerryjvl requested some examples. Here is the first problem. It consists of a small number of separate lines of code, together taking over half the time:
/* IF ALL TASKS DONE, SEND ITC_ACKOP, AND DELETE OP */
if (ptop->current_task >= ILST_LENGTH(ptop->tasklist)) {
. . .
/* FOR EACH OPERATION REQUEST */
for ( ptop = ILST_FIRST(oplist); ptop != NULL; ptop = ILST_NEXT(oplist, ptop)){
. . .
/* GET CURRENT TASK */
ptask = ILST_NTH(ptop->tasklist, ptop->current_task);
These were using the list cluster ILST (similar to a list class). They are implemented in the usual way, with “information hiding” meaning that the users of the class were not supposed to have to care how they were implemented. When these lines were written (out of roughly 800 lines of code) thought was not given to the idea that these could be a “bottleneck” (I hate that word). They are simply the recommended way to do things. It is easy to say in hindsight that these should have been avoided, but in my experience all performance problems are like that. In general, it is good to try to avoid creating performance problems. It is even better to find and fix the ones that are created, even though they “should have been avoided” (in hindsight). I hope that gives a bit of the flavor.
Here is the second problem, in two separate lines:
/* ADD TASK TO TASK LIST */
ILST_APPEND(ptop->tasklist, ptask);
. . .
/* ADD TRANSACTION TO TRANSACTION QUEUE */
ILST_APPEND(trnque, ptrn);
These are building lists by appending items to their ends. (The fix was to collect the items in arrays, and build the lists all at once.) The interesting thing is that these statements only cost (i.e. were on the call stack) 3/48 of the original time, so they were not in fact a big problem at the beginning. However, after removing the first problem, they cost 3/20 of the time and so were now a “bigger fish”. In general, that’s how it goes.
I might add that this project was distilled from a real project I helped on. In that project, the performance problems were far more dramatic (as were the speedups), such as calling a database-access routine within an inner loop to see if a task was finished.
REFERENCE ADDED: The source code, both original and redesigned, can be found in www.ddj.com, for 1993, in file 9311.zip, files slug.asc and slug.zip.
EDIT 2011/11/26: There is now a SourceForge project containing source code in Visual C++ and a blow-by-blow description of how it was tuned. It only goes through the first half of the scenario described above, and it doesn’t follow exactly the same sequence, but still gets a 2-3 order of magnitude speedup.
Suggestions:
Pre-compute rather than re-calculate: for any loops or repeated calls that contain calculations with a relatively limited range of inputs, consider making a lookup (array or dictionary) that contains the result of that calculation for all values in the valid range of inputs, then use a simple lookup inside the algorithm instead (a small sketch follows this list). Down-sides: if few of the pre-computed values are actually used this may make matters worse, and the lookup may take significant memory.
Don’t use library methods: most libraries need to be written to operate correctly under a broad range of scenarios, and perform null checks on parameters, etc. By re-implementing a method you may be able to strip out a lot of logic that does not apply in the exact circumstance you are using it. Down-sides: writing additional code means more surface area for bugs.
Do use library methods: to contradict myself, language libraries get written by people that are a lot smarter than you or me; odds are they did it better and faster. Do not implement it yourself unless you can actually make it faster (i.e.: always measure!)
Cheat: in some cases although an exact calculation may exist for your problem, you may not need ‘exact’, sometimes an approximation may be ‘good enough’ and a lot faster in the deal. Ask yourself, does it really matter if the answer is out by 1%? 5%? even 10%? Down-sides: Well… the answer won’t be exact.
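Here is a small C++ sketch of the pre-compute suggestion above, assuming an expensive pure function whose inputs are small integers (the function itself is just a placeholder):

#include <array>
#include <cmath>

// Placeholder for some expensive calculation with inputs 0..255.
double expensiveCalc(int x) { return std::sqrt(std::exp(x / 32.0)); }

// Pre-compute the results once; the hot loop then only does array lookups.
const std::array<double, 256>& lookupTable() {
    static const std::array<double, 256> table = [] {
        std::array<double, 256> t{};
        for (int i = 0; i < 256; ++i) t[i] = expensiveCalc(i);
        return t;
    }();
    return table;
}

double fastCalc(int x) { return lookupTable()[x]; }   // x is assumed to be in 0..255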
When you can’t improve the performance any more – see if you can improve the perceived performance instead.
You may not be able to make your fooCalc algorithm faster, but often there are ways to make your application seem more responsive to the user.
A few examples:
anticipating what the user is going to request and start working on that before then
displaying results as they come in, instead of all at once at the end
Accurate progress meter
These won’t make your program faster, but they might make your users happier with the speed you have.
I spend most of my life in just this place. The broad strokes are to run your profiler and get it to record:
Cache misses. Data cache is the #1 source of stalls in most programs. Improve cache hit rate by reorganizing offending data structures to have better locality; pack structures and numerical types down to eliminate wasted bytes (and therefore wasted cache fetches); prefetch data wherever possible to reduce stalls. (A small data-layout sketch follows this list.)
Load-hit-stores. Compiler assumptions about pointer aliasing, and cases where data is moved between disconnected register sets via memory, can cause a certain pathological behavior that causes the entire CPU pipeline to clear on a load op. Find places where floats, vectors, and ints are being cast to one another and eliminate them. Use __restrict liberally to promise the compiler about aliasing.
Microcoded operations. Most processors have some operations that cannot be pipelined, but instead run a tiny subroutine stored in ROM. Examples on the PowerPC are integer multiply, divide, and shift-by-variable-amount. The problem is that the entire pipeline stops dead while this operation is executing. Try to eliminate use of these operations or at least break them down into their constituent pipelined ops so you can get the benefit of superscalar dispatch on whatever the rest of your program is doing.
Branch mispredicts. These too empty the pipeline. Find cases where the CPU is spending a lot of time refilling the pipe after a branch, and use branch hinting if available to get it to predict correctly more often. Or better yet, replace branches with conditional-moves wherever possible, especially after floating point operations because their pipe is usually deeper and reading the condition flags after fcmp can cause a stall.
Sequential floating-point ops. Make these SIMD.
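As a small illustration of the data-locality point above (a sketch, not taken from any particular codebase): switching a hot loop from an array-of-structs to a struct-of-arrays means it pulls far fewer wasted bytes through the cache.

#include <cstddef>
#include <vector>

// Array-of-structs: updating positions also drags colour, health, etc. through the cache.
struct ParticleAoS { float x, y, z; float vx, vy, vz; int colour; int health; };

void updateAoS(std::vector<ParticleAoS>& ps, float dt) {
    for (auto& p : ps) { p.x += p.vx * dt; p.y += p.vy * dt; p.z += p.vz * dt; }
}

// Struct-of-arrays: the same loop now touches only the fields it needs,
// improving the cache hit rate and making SIMD vectorization easier.
struct ParticlesSoA {
    std::vector<float> x, y, z, vx, vy, vz;
    std::vector<int> colour, health;
};

void updateSoA(ParticlesSoA& ps, float dt) {
    for (std::size_t i = 0; i < ps.x.size(); ++i) {
        ps.x[i] += ps.vx[i] * dt;
        ps.y[i] += ps.vy[i] * dt;
        ps.z[i] += ps.vz[i] * dt;
    }
}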
And one more thing I like to do:
Set your compiler to output assembly listings and look at what it emits for the hotspot functions in your code. All those clever optimizations that “a good compiler should be able to do for you automatically”? Chances are your actual compiler doesn’t do them. I’ve seen GCC emit truly WTF code.
More suggestions:
Avoid I/O: Any I/O (disk, network, ports, etc.) is always going to be far slower than any code that is performing calculations, so get rid of any I/O that you do not strictly need.
Move I/O up-front: Load up all the data you are going to need for a calculation up-front, so that you do not have repeated I/O waits within the core of a critical algorithm (and maybe as a result repeated disk seeks, when loading all the data in one hit may avoid seeking).
Delay I/O: Do not write out your results until the calculation is over, store them in a data structure and then dump that out in one go at the end when the hard work is done.
Threaded I/O: For those daring enough, combine ‘I/O up-front’ or ‘Delay I/O’ with the actual calculation by moving the loading into a parallel thread, so that while you are loading more data you can work on a calculation on the data you already have, or while you calculate the next batch of data you can simultaneously write out the results from the last batch.
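A rough C++ sketch of the threaded I/O suggestion, using std::async to overlap loading the next batch with processing the current one. loadBatch and processBatch are hypothetical stand-ins for your own I/O and calculation code.

#include <future>
#include <numeric>
#include <vector>

// Hypothetical stand-ins: pretend these do real disk I/O and heavy computation.
std::vector<double> loadBatch(int index) {
    return std::vector<double>(1000, static_cast<double>(index));
}
double processBatch(const std::vector<double>& batch) {
    return std::accumulate(batch.begin(), batch.end(), 0.0);
}

double run(int batchCount) {
    if (batchCount <= 0) return 0.0;
    double total = 0.0;
    // Kick off the first load, then always load batch i+1 while processing batch i.
    auto pending = std::async(std::launch::async, loadBatch, 0);
    for (int i = 0; i < batchCount; ++i) {
        std::vector<double> current = pending.get();                      // wait for batch i
        if (i + 1 < batchCount)
            pending = std::async(std::launch::async, loadBatch, i + 1);   // start I/O for the next batch
        total += processBatch(current);                                   // compute while the next load runs
    }
    return total;
}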
Graph algorithms, the Bellman-Ford algorithm in particular.
Scheduling algorithms, the round-robin scheduling algorithm in particular.
Dynamic programming algorithms, the 0/1 knapsack algorithm in particular.
Backtracking algorithms, the 8-queens algorithm in particular.
Greedy algorithms, the fractional knapsack algorithm in particular.
We use all these algorithms in our daily life, in various forms and in various places.
For example, every shopkeeper applies one or more of the several scheduling algorithms to service his customers, depending upon his service policy and the situation. No single scheduling algorithm fits all situations.
All of us mentally apply one of the graph algorithms when we plan the shortest route to be taken when we go out for doing multiple things in one trip.
All of us apply one of the Greedy algorithms while selecting career, job, girlfriends, friends etc.
All of us apply one of the dynamic programming algorithms when we do simple multiplication mentally by referring to the multiplication tables stored in our memory.
Python’s built-in sort uses TimSort, a sorting algorithm invented by Tim Peters and now used in other languages such as Java.
TimSort is a complex algorithm which uses the best of many other algorithms, and it has the advantage of being stable: in other words, if two elements A and B are in the order A then B before the sort and they compare equal during the sort, then the algorithm guarantees that the result will maintain that A-then-B ordering.
That means, for example, that if you want to order a set of student scores by score and then by name (so equal scores are ordered alphabetically), you can sort by name and then sort by score.
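A small C++ illustration of the same two-pass idea using std::stable_sort (the standard library’s stable sort, typically a merge-sort variant rather than TimSort itself):

#include <algorithm>
#include <string>
#include <vector>

struct Student { std::string name; int score; };

void orderByScoreThenName(std::vector<Student>& students) {
    // First pass: alphabetical order by name.
    std::stable_sort(students.begin(), students.end(),
                     [](const Student& a, const Student& b) { return a.name < b.name; });
    // Second pass: sort by score; stability keeps equal scores in name order.
    std::stable_sort(students.begin(), students.end(),
                     [](const Student& a, const Student& b) { return a.score < b.score; });
}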
TimSort has good performance against data sets which are partially sorted or already sorted (areas where some other algorithms struggle).
Timsort – Wikipedia
Timsort was designed to take advantage of runs of consecutive ordered elements that already exist in most real-world data, called natural runs. It iterates over the data, collecting elements into runs and simultaneously putting those runs on a stack. Whenever the runs on the top of the stack match a merge criterion, they are merged. This goes on until all data is traversed; then, all runs are merged two at a time until only one sorted run remains.
https://en.m.wikipedia.org/wiki/Timsort
I’m currently coding a SAT solver algorithm that will have to take millions of input data, and I was wondering if I should switch from Python to C.
Answer: Using best-of-class equivalent algorithms, optimized compiled C code is often multiple orders of magnitude faster than Python code interpreted by CPython (the main Python implementation). Other Python implementations (like PyPy) might be a bit better, but not vastly so. Some computations fit Python better, but I have a feeling that a SAT solver implementation will not be competitive if written in Python.
All that said, do you need to write a new implementation? Could you use one of the excellent ones out there? CDCL implementations often do a good job, and there are various open-source ones readily available (e.g., this one: https://github.com/togatoga/togasat).
Comments:
1- I mean, it also depends. I recall seeing an analysis some time ago that showed CPython can be as fast as C, provided you are almost exclusively using library functions written in C. That being said, for any non-trivial Python program it will probably be the case that you must spend quite a bit of time in the interpreter, and not in C library functions.
There are two main reasons: performance and familiarity. While Rust has been shown to be faster than C++ in some benchmarks, it’s not as fast as hand-written assembly language, and many developers have been working in assembly for so long that they’re not willing to give it up.
However, there’s another reason why some developers are sticking with C++: compiler optimization.
C++ compilers are more intelligent than Rust compilers when it comes to optimizing code for performance, so if you’re looking for top-notch performance from your application, then you might want to stick with C++ until the Rust compiler has caught up.
The C++ programming language definition is written in English and in other human languages. Programming language definitions are written for humans to read. They are not written in programming languages.
An actual implementation of a C++ compiler (or interpreter) can be written in any general-purpose programming language. Some are written in C, some are written in C++, some are written in other programming languages. Some are written with the help of compiler development tools and infrastructure (e.g., lex, yacc, flex, bison, antlr, LLVM, etc.). It just depends on the specific C++ implementation you’re looking at.
This is true of all high-level programming languages. Any general-purpose programming language can be used to implement a compiler or interpreter, no matter what programming language you are compiling or interpreting.
Learn other languages. It will broaden your perspective and hopefully make you a better developer.
Alan Perlis, one of the developers of ALGOL, once said, “A language that doesn’t affect the way you think about programming, is not worth knowing.”
Conversely, that implies learning other languages can and will affect the way you think about programming, provided you get some variety of exposure.
C++ is a multiparadigm language. But if you haven’t had exposure to those paradigms in a more focused setting, you might not understand the value they bring, or their strengths, weaknesses, idioms, and insights.
So even if you do the bulk of your programming in C++, you may not be using it the most effective way possible.
I know I personally have gaps, because I haven’t explored certain paradigms myself. I owe it to myself to at least dip my toe in some of them. I know this, because every time I learn a new language or environment, I sense a gap closing—a gap I may not have been aware of previously.
You don’t even need to spend a lot of time to gain value, either. I may have only spent a week with Scala, for example, but I learned more than just the base language from it. I hadn’t really encountered fold and match expressions as such basic and integral concepts, for example.
And despite its negative reputation, I found Perl to be an excellent language to learn about multiple programming techniques.
Mark Jason Dominus’ Higher Order Perl opened my eyes to a number of techniques that I believe originated more from the LISP world.
Example: Partial Function Application
In Perl, you can implement partial function application (sometimes conflated with the related concept of currying) with your eyes closed and one hand behind your back. Suppose I want to bind the first argument of foo():
my $f = sub { return foo($arg1, @_); };
Now I can invoke $f as a function with that first argument bound, with a slight syntax tweak: &$f(…) or $f->(…). I don’t even need to think about the rest.
Trying to learn about that for the first time in C++ likely would have lost the forest for the trees.
C++98 was quite primitive. It offered std::bind1st and std::bind2nd for 2-argument function objects only. Boost offered boost::bind,[1] which had its own limitations. And because these were relatively uncommon, they were unfamiliar to many C++ users (at least among the crowd I was in). C++ lambdas (introduced with C++11) help, but they don’t work for arbitrary arguments until C++14. For that, you probably need parameter packs, forwarding references, and std::forward.[2] And then there are object lifetimes to consider, so for your bound arguments you might need to trade off between copy, move, capturing a reference, smart pointers,[3][4] etc. Oh, and finally, it won’t yield a function pointer, but rather a function object, so it’s not usable in places that need a pure function pointer. Although, if it manages to be capture-less, it can provide a pure function pointer by applying unary + to it…
Can you see how you might lose the forest for the trees here?
If you didn’t already have some idea of the usefulness of partial application, would you even try? If you hadn’t encountered the concept before, would it have even come to mind when you saw lambdas?
Punchline
In practice, if you’re already well versed in C++, it’s not actually all that difficult to implement techniques like partial application in C++. You’re already accustomed to the rigamarole described above, since C++ confronts you with those sorts of decisions regularly.
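For comparison, here is a minimal sketch (mine, not from the original answer) of the same binding trick in modern C++, using a C++14 generic lambda and a made-up foo():

#include <iostream>
#include <utility>

int foo(int a, int b, int c) { return a * 100 + b * 10 + c; }

int main() {
    int arg1 = 7;
    // Bind the first argument; forward whatever else gets passed later.
    auto f = [arg1](auto&&... rest) {
        return foo(arg1, std::forward<decltype(rest)>(rest)...);
    };
    std::cout << f(3, 4) << '\n';  // prints 734, i.e., foo(7, 3, 4)
}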
It does cloud things noticeably, however. Learning the concepts in a simpler environment separates you from the implementation noise.
Learn other languages and become a more rounded and hopefully better developer. Step away from C++’s innumerable trees of details to see different areas of the forest more clearly.
In C++, how can a template object be deleted with or without the delete keyword? (template <class T> class Obj;)
If you allocated it with new, then delete it by passing the pointer to delete, just like any other pointer. There’s nothing particularly special about a pointer to an object whose type happens to be a template.
Most of the time, though, you shouldn’t be calling new and delete directly.
Is there a way to prevent objects in a class to collect garbage in Java? If no, why?
Can you prevent objects being garbage collected? Yes. Retain a reference to them for the lifetime of the program. That will defer garbage collection until the program ends.
Is there a way to remove garbage collection? No.
Why?
Language design choice to simplify memory management
Typically garbage collection happens without issues
If your app struggles with garbage collection, that may point to a design revision being necessary, or maybe Java is not the right fit. I’ve not experienced that to date though.
Garbage collection is very mainstream now, being used in JavaScript, TypeScript, Python, Kotlin, C#, Go, Swift, Lisp, Smalltalk, Clojure, Haskell. This is why I do not understand the issues with noobs and GC: it is bloody everywhere. In all your favourite “best languages”.
The only languages I know without garbage collection are C, C++ and Rust. Oh and Pascal, but that’s not mainstream at present.
So if your app is truly “the one” that cannot be solved with GC, then you’re probably learning C++, Rust or C. Rust is the most modern of these and the one I would recommend. I would probably use C++ myself, as I have some background in it. By Alan Mellor
When should “new” be used in C++?
new’s use should be confined to very narrow use-cases. Examples of use cases where new is ok:
Writing low-level memory management code such as allocators and deallocators, smart pointers, etc.
Working with code/libraries that use outdated C++ programming idioms, like Qt — but then narrowly limited to the extent necessary to work with Qt
You are going to need to preallocate an object to pass to an API that indicates it will assume ownership of it (i.e., responsibility for deleting the object). If you are going to work with that object at all before passing it off, you should not use new (use a std::unique_ptr and call .release() when calling the API).
The way to dynamically allocate memory correctly in modern C++ is std::make_unique or std::make_shared. The first returns a std::unique_ptr to the allocated object (which will delete the object for you when it goes out of scope); the second returns a std::shared_ptr, which can be copied around — the object will be deleted for you when there are no more copies of the shared pointer.
For most programming work, you don’t need and shouldn’t use “new” or even worse “malloc”.
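A minimal sketch of what that looks like in practice (Widget is a made-up type for illustration):

#include <memory>
#include <string>

struct Widget {
    std::string name;
    explicit Widget(std::string n) : name(std::move(n)) {}
};

int main() {
    // Sole owner; the Widget is deleted automatically at end of scope.
    auto w1 = std::make_unique<Widget>("unique");

    // Shared ownership; deleted when the last shared_ptr copy goes away.
    auto w2 = std::make_shared<Widget>("shared");
    auto w3 = w2;  // w2 and w3 keep the same Widget alive
}  // no new, no delete anywhere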
Originally Answered: Why do array indices start with 0 (zero) in many programming languages?
Array indices should start at 0. This is not just an efficiency hack for ancient computers, or a reflection of the underlying memory model, or some other kind of historical accident—forget all of that. Zero-based indexing actually simplifies array-related math for the programmer, and simpler math leads to fewer bugs. Here are some examples.
Suppose you’re writing a hash table that maps each integer key to one of n buckets. If your array of buckets is indexed starting at 0, you can write bucket = key mod n; but if it’s indexed starting at 1, you have to write bucket = (key mod n) + 1.
Suppose you’re writing code to serialize a rectangular array of pixels, with width w and height h, to a file (which we’ll think of as a one-dimensional array of length w*h). With 0-indexed arrays, pixel (x, y) goes into position y*w + x; with 1-indexed arrays, pixel (x, y) goes into position y*w + x - w.
Suppose you want to put the letters ‘A’ through ‘Z’ into an array of length 26, and you have a function ord that maps a character to its ASCII value. With 0-indexed arrays, the character c is put at index ord(c) - ord(‘A’); with 1-indexed arrays, it’s put at index ord(c) - ord(‘A’) + 1.
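As a small illustration of that last example (my sketch, assuming an ASCII-style character set):

#include <array>

// With 0-based arrays, 'A'..'Z' map directly to indices 0..25 with no "+ 1"
// correction anywhere.
std::array<char, 26> make_letter_table() {
    std::array<char, 26> letters{};
    for (char c = 'A'; c <= 'Z'; ++c)
        letters[c - 'A'] = c;  // index = ord(c) - ord('A')
    return letters;
}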
It’s in fact one-based indexing that’s the historical accident—human languages needed numbers for “first”, “second”, etc. before we had invented zero. For a practical example of the kinds of problems this accident leads to, consider how the 1800s—well, no, actually, the period from January 1, 1801 through December 31, 1900—came to be known as the “19th century”.
Originally Answered: Knowing that Python is very slow compared to Java and C++, why do they mostly use Python for fast algorithmic procedures like machine learning?
No, almost no one uses Python libraries for machine learning.
Before you start listing counterexamples, notice the emphasized words. Yes, a lot of people use Python for machine learning, because it allows for very fast prototyping and overall exploration of problem space, but none of the libraries they are using for it are actually written in Python. Indeed, they are almost always written in either Fortran or C++ instead, and just interface with Python through some thin wrapper.
The slowness of Python is completely irrelevant if the only thing you do with it is invoking a library function written in highly-optimized C++.
Many companies have bet their stack on Java, so there’s demand for Java programmers.
The JVM is cross-platform, and uses run-time information to manage itself.
It takes care of memory management.
Java 8 has lambda expressions, and includes an implementation of JavaScript called Nashorn that runs on the JVM.
Static typing: Java is typesafe, and its static typing is essentially a form of self-documenting code.
Java is mature: It’s been around for 20 years, it’s fully backward compatible, and code written decades ago still works.
Android: Java 7 works on the world’s largest mobile OS.
For those and other reasons, Java is one of the world’s most widely used languages. Oracle says there are 10 million Java programmers worldwide. The Github stats from Eduardo Bonet speak volumes.
What important Java programming questions are asked during interviews?
What have you built using Java?
How did you design that thing? What were the key principles you followed?
How did you test that thing?
Java is an object oriented language. What design principles have you found helpful?
What does clean code mean to you?
How do you add new features while keeping existing code working in the CI pipeline?
These are the kinds of things I’m interested in. Many of them get covered as we work together on a simple programming kata.
It all boils down to ‘can you use Java and its tools to work alongside us on our team’. By Alan Mellor
If you understand loops, variables and conditionals only, that’s enough to hack out a FizzBuzz. If you’re a bit further along the path, you can write a cleaner FizzBuzz.
The challenge itself is about writing fizz and buzz when a number is exactly divisible by 3 or 5. It’s not really important, except that it steers you to use those elements of programming above.
It can be done in any language as those concepts are foundational to every language.
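For reference, a plain C++ version using only those foundations (a sketch, since the question names no particular language):

#include <iostream>

int main() {
    for (int i = 1; i <= 100; ++i) {
        if (i % 15 == 0)      std::cout << "FizzBuzz\n";
        else if (i % 3 == 0)  std::cout << "Fizz\n";
        else if (i % 5 == 0)  std::cout << "Buzz\n";
        else                  std::cout << i << '\n';
    }
}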
Let me open with a quote that you’ve probably seen many times:
premature optimization is the root of all evil. — Donald Knuth
Programs are regularly gigantic. If you profile a program that isn’t fast enough, you’ll often find histograms that show the top 1,000 functions all taking well under 0.1% of the execution time. “Optimizing” those 1,000 functions is usually not practical and would likely not achieve the desired speedup anyway.
The number of executed instructions is often relatively irrelevant. Instead, the number of cache misses is far more critical, but it’s also much harder to locate them. Avoiding cache misses is something that may require design work up front, because it affects core data structures.
Machines are highly heterogeneous, and extracting performance is not just a matter of dealing with the main CPU cores (which may not be homogeneous!), but also to arrange for efficient use of vector units, and accelerators like GPUs, media co-processors, and neural engines. Utilizing all those units is also something that may require design work up front.
Performance is not just a matter of execution time. It’s also a matter of energy consumption and scalability. And response time: More than ever, software is interactive, and yet has to deal with new kinds of latencies (e.g., from networking).
Software is an independent industry: If your version 1.0 is too slow, or uses too much battery life, or chokes your data center, you might not get a chance at developing an optimized version 1.5. (In 1974, software was mostly an add-on to hardware.)
Software is built from independent components: While developing a specific component, you might not know just how hard it will be pushed. If you don’t design for performance from the start you may end up painting yourself into a corner.
All that to say that Knuth’s quote should be taken for what it is: Don’t optimize local instruction counts early on. But don’t skip thinking about optimizing design and data structures from the start, because if performance matters in any way (throughput, latency, energy use, or scalability) it’s something that’s difficult or impossible to “retrofit”. Things to think about:
How will you evaluate performance? How will you track it during the development and maintenance process?
How can you avoid computation that’s not needed? This might mean to architect for “lazy evaluation”.
How will you lay out your data for efficient access (i.e., make best use of the memory hierarchy)?
How will you organize your algorithms and data structures to take advantage of the available computational resources?
When considering algorithms, what regime will they work in? A traditional example: “Fast” sorting algorithms are typically only preferable once there are enough elements (often 50+) to sort; if you know that you’ll be repeatedly sorting a dozen elements, those algorithms may not be your best option.
Are the complexities introduced to achieve better performance worth their overall (negative?) impact on the project?
When all that is handled adequately, you might eventually have to deal with “nitty gritty code optimization”, and it will have a chance to be meaningful.
Now, regarding the original question:
What do most programmers do (when optimizing code) that is essentially wrong?
I don’t think that’s generalizable. I think Knuth’s quote is often mis-construed… but I wouldn’t say that “most programmers” do that. I’m not even sure that “most programmers” optimize code at all. I also think that Knuth’s quote is often ignored, and that’s not great either… but again, I’d venture that it doesn’t involve “most programmers”. Programmers are a very diverse bunch, with many diverse roles, working on a great diversity of projects that may or may not have concrete performance constraints.
In other words, I think the question has no meaningful answer.
Finally, I’d like to close with a quote from the late Len Lattanzi (whom I had the pleasure of having as a colleague for a few years):
Belated pessimization is the leaf of no good. — Len Lattanzi
Umm, that’s really up to you. But there are some tradeoffs.
Java:
Pros:
Extremely widely used. You’ll never want for a job if you are good at it. Other languages (Scala, Kotlin, Groovy) run on the JVM as well. There is a lot of cool big data processing that you can use Java for (Apache Spark, Hadoop, etc.).
Cons:
Tons of bloatware (WebSphere, WebLogic, Adobe Experience Manager) runs on Java. You’re likely to end up coding up some legacy enterprise garbage. UIs written in Java are crap at best.
.NET:
Pros:
Well supported by Microsoft. Visual Studio is gorgeous.
Cons:
Not so many open source libraries, you’ll likely be coding for Windows. This means that your development machine will be Windows (dealbreaker for me). Also, no cool little startup will use .NET ever. Not as many jobs as Java. UIs written in .NET are crap at best.
Node.js
Pros:
Much more concise and faster to develop for than either .NET or Java. Almost as many open source libraries as there are for Java.
Cons:
Memory management, thread management, and overall performance aren’t as good as Java or .NET. You’ll have a harder time finding a Node.js job unless you also know a client side JS framework such as Vue.js or React.js. In that case, you’ll be very much in demand.
Others:
If you want to stick to server-side coding, you should consider Rust and Golang. Both are more performant than any of the above. Benchmarks I’ve read suggest that Rust is overall more performant but that Golang has better concurrency management.
pass in two objects as collaborating parameters so methods can be called on them
The second way is good in OO. You do your calculation once, store the two results as state in an object, use two separate accessors in the calling code.
Why do some programmers not use “using namespace std” in C++?
Yup, what they said. When you say using namespace std, you potentially import hundreds or thousands of symbols into your code. They are symbols with short names like sort, find and get, that you might want to use for your own code. The actual number of symbols imported into your code depends on what headers you include, so your program might work today, and tomorrow, when you add a new header, it might break. The list of symbols in namespace std is subject to change (that’s why it’s in a namespace), so your program might work under C++03 and break when you try to compile it under C++11.
You can avoid all these hassles by using fully qualified names; std::cout instead of cout, std::sort() instead of sort(), etc. The more experienced the programmer, the more likely they are to do this.
You can also limit the scope of your namespace pollution by putting your using directive at function scope. By Kurt Guntheroth
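For instance, a sketch of that function-scope approach (my example, not Kurt’s):

#include <algorithm>
#include <string>
#include <vector>

void sort_names(std::vector<std::string>& names) {
    using namespace std;               // the directive is limited to this function
    sort(names.begin(), names.end());  // resolves to std::sort
}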
Do all pointers have the same size in C++?
Theoretically, no. Not even for a given system. A char* may have a size different from an int*.
In practice, yes.
First, note that all pointers to object types (as opposed to function types) must be able to round-trip through void* (modulo cv-qualification). So if different object pointer types had different sizes, void* would have to be as large as the largest of them.
Second, for pointers to object types there aren’t many potential advantages to having them be of different size. Why make things complex if they can be made simple at no perceivable cost?
Third… plenty of reasonable code “out there” assumes that all pointers have the same size. So building an implementation where that’s not the case handicaps that implementation right out of the gate.
For function pointers it may actually sometimes be interesting from a performance point of view to give them twice the size of ordinary pointers, because they may have to encapsulate both the address of the function and the address of the associated data segment (in shared library models where a separate data segment is created for every shared library instance). However, because of compatibility considerations even those implementations just add an indirection to keep the function pointers compatible with void* (even though function pointers are not strictly required by the standard to round-trip through void*).
On the scale of bad programming, “if” is at the bottom of the list.
Compilers are very smart about these things. As an example, consider the alternatives:
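(The answer’s original snippets aren’t reproduced here; a typical pair of alternatives looks something like this reconstruction:)

// Two spellings of the same selection; with optimization enabled, compilers
// typically emit identical, branch-free code (e.g., a conditional move).
int max_if(int a, int b) {
    if (a > b)
        return a;
    return b;
}

int max_ternary(int a, int b) {
    return (a > b) ? a : b;
}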
Both compile to the exact same code sequence, which does NOT include a branch.
Source code is supposed to be a way to express your intent to the computer. You really should write the source to be as clear as possible and leave the microoptimizations to the compiler. Once you get the program working and correct, then you can look at performance. Use profiling tools to figure out where the time is going and speed up the parts that are slow AND where being slow actually matters.
By the way, you shouldn’t be afraid of branches either. The branch prediction logic in modern processors is nearly telepathic. AMD is using neural nets inside the chip (!). The predictors will correctly guess what is going to happen more than 90% of the time.
The other answers are mistaken. This is a very common confusion. They describe statically typed language, not strongly typed language. There is a big difference.
Strongly typed vs weakly typed:
In strongly typed languages you get an error if the types do not match in an expression. It does not matter if the type is determined at compile time (static types) or runtime (dynamic types).
Both java and python are strongly typed. In both languages, you get an error if you try to add objects with unmatching types. For example, in python, you get an error if you try to add a number and a string:
>>> a = 10
>>> b = “hello”
>>> a + b
Traceback (most recent call last):
File “<stdin>”, line 1, in <module>
TypeError: unsupported operand type(s) for +: ‘int’ and ‘str’
In Python, you get this error at runtime. In Java, you would get a similar error at compile time. Most statically typed languages are also strongly typed.
The opposite of strongly typed language is weakly typed. In a weakly typed language, there are implicit type conversions. Instead of giving you an error, it will convert one of the values automatically and produce a result, even if such conversion loses data. This often leads to unexpected and unpredictable behavior.
Javascript is an example of a weakly typed language.
> let a = 10
> let b = “hello”
> a + b
’10hello’
Instead of an error, JavaScript will convert a to string and then concatenate the strings.
Static types vs dynamic types:
In a statically typed language, variables are bound to types and may only hold data of that type. Typically you declare variables and specify the type of data that the variable has. In some languages, the type can be deduced from what you assign to it, but it still holds that the variable is bound to that type. For example, in Java:
int a = 3;
a = “hello” // Error, a can only contain integers
in a dynamically typed language, variables may hold any type of data. The type of the data is simply determined by what gets assigned to the variable at runtime. Python is dynamically typed, for example:
a = 10
a = “hello”
# no problem, a first held an integer and then a string
Comments:
#1: Don’t confuse strongly typed with statically typed.
Python is dynamically typed and strongly typed. JavaScript is dynamically typed and weakly typed. Java is statically typed and strongly typed. C is statically typed and weakly typed.
I also added a drawing that illustrates how strong and static typing relate to each other:
Python is dynamically typed because types are determined at runtime. The opposite of dynamically typed is statically typed (not strongly typed)
Python is strongly typed because it will give errors when types don’t match instead of performing implicit conversion. The opposite of strongly typed is weakly typed
finalize() is not guaranteed to be called, and the programmer has no control over when or in what order finalizers are called.
They are useless and should be ignored.
A destructor is not part of Java. It is a C++ language feature with very precise definitions of when it will be called.
Comments:
1- Until we got to languages like Rust (with the Drop trait) and a few others, was C++ the only language that had the destructor as a concept? I feel like other languages were inspired by that.
2- Many others manage memory for you, even predating C: COBOL, FORTRAN and so on. That’s another reason there isn’t much attention paid to destructors.
Mainly getting out of that procedural ‘function operates on parameters passed in’ mindset.
Tactically, the static can normally be moved onto one of the parameter objects. Or all the parameters become an object that the static moves to. A new object might be needed. Once done the static is now a fully fledged method on an object and is not static anymore.
I view this as a positive iterative step in discovering objects for a system.
For cases where a static makes sense (none come to mind offhand), a good practice is to move it closer to where it is used, either in the same package or on a class that is strongly related.
I avoid having global ‘Utils’ classes full of statics that are unrelated. That’s fairly basic design, keeping unrelated things separate. In this case, the SOLID ISP principle applies: segregate into smaller, more focused interfaces.
Not really. I use Python occasionally for “quick hacks” – programs that I’ll probably run once and then delete – also, because I use “blender” for 3D modeling and Python is its scripting language.
I used to write quite a bit of JavaScript for web programming but since WASM came along and allows me to run C++ at very nearly full speed inside a web browser, I write almost zero JavaScript these days.
I use C++ for almost everything.
Once you get to know C++ it’s no harder than Python – the main thing I find great about Python is the number of easy-to-find libraries.
But in AAA games, the poor performance of Python pretty much rules it out.
In embedded systems, the computer is generally too small to fit a Python interpreter into memory – so C or C++ is a more likely choice.
JavaScript is a scripting language that was created by Brendan Eich and later standardized by ECMA’s Technical Committee. It works perfectly in web browsers without the help of any web server or a compiler. It allows you to change HTML and CSS in the browser without a full page reload. That is why it is used to create dynamic and interactive web pages.
TypeScript is a superset of the JavaScript language. It was presented and developed by Microsoft technical fellow Anders Hejlsberg in 2012. TypeScript appeared for a specific reason: the more JavaScript grew, the heavier and more unreadable JS code became. This became especially evident when developers started to use JavaScript for server-side technologies.
TypeScript is an open-source language that has a compiler that converts TypeScript code to JavaScript code (see the TypeScript playground service). That compiler is cross-browser and also open-source. To start using TypeScript, you can rename your .js files to .ts files, and if there are no logical mistakes in the JS code, you get valid TypeScript code. So, TypeScript code is essentially JavaScript code with some additions (and existing JavaScript code is already valid TypeScript). To learn more about those additions, watch the original video presentation of TypeScript. Meanwhile, we discuss the key differences between JS and TS in 2022.
I think TypeScript *is* pretty popular, within the constraints it has.
Node.js is 1.8% of websites, and TypeScript is seldom used outside of Node.js. That really means TypeScript has limited potential for use there.
You can use TypeScript on the client-side, but it can be a pain to set up, and unless you have quite a lot of client-side logic, it might not be worth it.
Personally, I think TypeScript on the client-side is well worth the effort, but not really worth it on the server side, where there are so many options outside of a JS runtime.
I don’t think anybody says JavaScript is a dead language. I think its long term future is pretty bleak though, for two reasons:
TypeScript.
WebAssembly.
The entire Internet doesn’t run on JavaScript, in fact hardly any of it does, what you mean is the *web*. The web and the Internet are two different things, and while JavaScript is of course ubiquitous in web sites, practically no Internet infrastructure is using JavaScript.
If you consider the Internet to be the road infrastructure and cars, the web is the screaming babies in the back seats.
Unless you can write really good TypeScript code, you’re probably better off sticking to JavaScript – if you have that option of course.
The main advantage of JS vs TS in an interview is that equivalent code will be much quicker to write with JS, as you don’t have to write type annotations and whatnot. The time that you have to spend mechanically writing code is not negligible, and time is of the essence.
Then again, the better you are at TypeScript, the less this will make a difference. Also, in TypeScript there are more ways to write functionally equivalent code, so when you’re really great at TS you’re more likely to pick the very best way to express what you want to do, so your expertise and good coding style are more evident. Finally, with good TS you should be able to avoid writing some tests that may be necessary in JS, and your coding style is naturally more defensive, which is good.
Originally Answered: If you build a huge website like eBay, Amazon, Facebook today which technology stack and language would you choose: Java/springboot, c#/.netcore, PHP, python, typescript/reactjs/Node.js (backendMySQL,Linux and frontend JavaScript is mostly fixed)?
Of those, TypeScript/Node.js/React is an easy answer. Though I’d also strongly recommend TypeScript on the frontend as well. If you skip Redux and instead use React Hooks you should find that TypeScript is a good fit.
But I wouldn’t use MySQL. PostgreSQL is stronger on almost every axis at this point, and given the lack of specificity of the purpose of the web site, I wouldn’t even necessarily recommend PostgreSQL over a half dozen other types of database.
Listen, if you want to design a web site such that it can grow, you need to make key technology choices strategically. If you’re using PostgreSQL, you can nearly seamlessly switch to CockroachDB, for instance, for much easier distributed database performance. Unless your database needs support for Geo-indexing, in which case you might need to split data between CockroachDB and MongoDB (edit: CockroachDB added Geo-index support!). Or if your website would benefit from a graph database, maybe OrientDB would be best.
Designing a website architecture is something that should be done by experienced experts. And the design goes deeper than just the technology choices. You need an architect who knows how to coordinate the architecture and the data flow your specific app will require. Otherwise you could paint yourself into a corner and end up with a site that’s failing at load with no easy path to fixing it, just at the point when your users are asking for more features.
A common cop-out inspired by the agile community is to claim that you just “ignore” the design and optimize later, but the truth is that many services that rely on that approach simply fail when they start to get traction.
Ironically, given your list of companies to be like, Facebook largely succeeded because a previous successful competitor, Friendster, couldn’t keep up with its expansion. The architecture had too many bottlenecks for them to scale horizontally, and they started hemorrhaging users by the thousands when the users found the site to be unresponsive too often. So if you want to be a Facebook, then plan for scaling from the start; otherwise the odds are good you’ll be a Friendster instead.
Not that Facebook necessarily planned it out in advance. I suspect they were instead just lucky. But “being lucky” isn’t a business plan.
I want to code a very basic cloud storage website like Dropbox (website only) using Javascript. What do I need to know? Any frameworks, libraries, tools I need to know?
In addition to the web site code, you’ll need:
Some kind of storage. AWS S3 is the normal solution, but Google and Azure and other services offer storage as well. There are good JavaScript APIs for all of them.
User account storage. AWS has Cognito, which I find a bit opaque, but Google Firebase has a pretty easy to use user database. Or you can roll your own user management.
You probably want server functions. AWS Lambda or Google Firebase functions will work.
I recommend using TypeScript, because I always recommend TypeScript. But you can do all the above with JavaScript.
It’s a bit overkill for a really basic Dropbox app, but I like RedwoodJS at this point. It doesn’t really help with the online storage part, but it will make it easier to deploy your server functions to a serverless backend. By Tim Mensh
How do microservices deal with relationships between tables and transactions where every service has its own database?
Ideally, microservices should be disjoint in all aspects, neither making reference to each other nor to common resources like shared databases.
So if transactions need to cross multiple microservices’ calls, or if they need to join tables, maybe you have a case for combining several would-be microservices into one.
Or maybe you have a case for sharing databases across microservices.
Or maybe you should use disjoint databases with one of various strategies for implementing distributed transactions.
Or maybe you have a case for not using microservices at all.
A major benefit of microservices is that you can develop them independently — which facilitates scaling development — and you can run them independently and often multiply or redundantly, which facilitates run-time scaling.
That means microservices can be a solution to some problems, but not all. If they add more problems than they solve, or add more complexity than they’re worth, don’t get stuck using microservices for their own sake or because they’re the latest trend.
Can’t go wrong with any of those, really. I personally don’t care too much for the Node solution, but it’s plenty capable (if you can stomach that whole JS ecosystem thing)
What is a simple C++ program to find the average of 2 numbers?
This was actually one of the interview questions I got when I applied at Google.
“Write a function that returns the average of two numbers.”
So I did, the way you would expect. (x+y)/2. I did it as a C++ template so it works for any kind of number.
interviewer: “What’s wrong with it?”
Well, I suppose there could be an overflow if adding the two numbers requires more space than the numeric type can hold. So I rewrote it as (x/2) + (y/2).
interviewer: “What’s wrong with it now?”
Well, I think we are losing a little precision by pre-dividing. So I wrote it another way.
interviewer: “What’s wrong with it now?”
And that went on for about 10 minutes. It ended with us talking about the heat death of the universe.
I got the job and ended up working with the guy. He said he had never done that before. He had just wanted to see what would happen.
Comments:
1-
The big problem you get with x/2 + y/2 is that it can/will give incorrect answers for integer inputs. For example, let’s average 3 and 3. The result should obviously be 3.
But with integer division, 3/2 = 1, and 1+1 = 2.
You need to add one to the result if and only if both inputs are odd.
2- Here’s what I’d do in C++ for integers, which I believe does the right thing including getting the rounding direction correct, and it can likely be made into a template that will do the right thing as well. This is not complete code, but I believe it gets the details correct…
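(The commenter’s code isn’t included in this compilation. As a stand-in, here is a rough sketch of mine, not the original, of an overflow-safe integer average, together with C++20’s std::midpoint, which exists for exactly this purpose:)

#include <numeric>       // std::midpoint (C++20)
#include <type_traits>

// Sketch, not the commenter's original code: overflow-free average of two
// values of the same integer type. Note the rounding rule: this rounds toward
// negative infinity, unlike (op1 + op2) / 2, which truncates toward zero.
// Right-shifting a negative value is well defined (arithmetic) as of C++20.
template <typename T>
constexpr T floor_average(T op1, T op2) {
    static_assert(std::is_integral_v<T>, "integer types only");
    return (op1 & op2) + ((op1 ^ op2) >> 1);
}

static_assert(floor_average(3, 3) == 3);
static_assert(floor_average(3, 4) == 3);   // no overflow, correct rounding
static_assert(std::midpoint(3, 4) == 3);   // the standard library answer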
That will work for any signed or unsigned integer type for op1 and op2 as long as they have the same type.
If you want it to do something intelligently where one of the operands is an unsigned type and the other one is a signed type, you could do it, but you need to define exactly what should happen, and realize that it’s quite likely that for maximum arithmetic correctness, the output type may need to be different than either input type. For instance, the average of a uint32_t and an int32_t can be too large to fit in an int32_t, and it can also be too small to fit in a uint32_t, so you probably need to go with a larger signed integer type, maybe int64_t.
3- I would have answered the question with a question, “Tell me more about the input, error handling capability of your system, and is this typical of the level of challenge here at google?” Then I’d provide eye contact, sit back, and see what happens. Years ago I had an interview question that asked what classical problem was part of a pen plotter control system. I told the interviewer that it was TSP but that if you had to change pens, you had to consider how much time it took to switch. They offered me a job but I declined given the poor financial condition of the company (SGI) which I discovered by asking the interviewer questions of my own. IMO: questions are at the heart of engineering. The interviewer, if they are smart, wants to see if you are capable of discovering the true nature of their problems. The best programmers I’ve ever worked with were able to get to the heart of problems and trade off solutions. Coding is a small part of the required skills.
Can two servers have the same public IP address?
Yes, they can.
There are features in HTTP to allow many different web sites to be served on a single IP address.
You can, if you are careful, assign the same IP address to many machines (it typically can’t be their only IP address, however, as distinguishable addresses make them much easier to manage).
You can run arbitrary server tasks on your many machines with the same IP address if you have some way of sending client connections to the correct machine. Obviously that can’t be the IP address, because they’re all the same. But there are ways.
However… this needs to be carefully planned. There are many issues. By Andrew Mc Gregor
What are some algorithms that computer hardware advances have made obsolete?
It depends on how you want to store and access data.
For the most part, as a general concept, old school cryptography is obsolete.
It was based on ciphers, which were based on it being mathematically “hard” to crack.
If you can throw a compute cluster at DES, even with a one byte “salt”, it’s pretty easy to crack a password database in seconds. Minutes, if your cluster is small.
Almost all computer security is based on big number theory. Today, that’s called: Law of large numbers – Wikipedia
(From the linked article: in probability theory, the law of large numbers (LLN) is a theorem describing the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and tends to become closer to the expected value as more trials are performed; this guarantees stable long-term results for the averages of some random events.)
What it means is that it’s hard to do math on very large numbers, and so if you have a large one, the larger the better.
Most cryptography today is based on elliptic curves.
But what we know from the proof of Fermat’s Last Theorem, and specifically the Taniyama-Shimura conjecture, is that all elliptic curves have modular forms.
And so this gives us an attack on all modern cryptography, using graphical mathematics.
It’s an interesting field, and problem space.
Not one I’m interested in solving, since I’m sure it has already been solved by my “associates” who now work for the NSA.
I am only interested in new problems.
Comments:
1- Sorry, but this is just wrong. “Almost all cryptography,” counted by number of bytes encrypted and decrypted, uses AES. AES does not use “large numbers,” elliptic curves, or anything of that sort – it’s essentially combinatorial in nature, with a lot of bit-diddling – though there is some group theory at its base. The same can be said about cryptographic checksums such as the SHA series, including the latest “sponge” constructions.
Where RSA and elliptic curves and such come in is public key cryptography. This is important in setting up connections, but for multiple reasons (performance – but also for excellent cryptographic reasons) is not used for bulk encryption. There are related algorithms like Diffie-Hellman and some signature protocols like DSS. All of these “use large numbers” in some sense, but even that’s pushing it – elliptic curve cryptography involves doing math over … points on an elliptic curve, which does lead you to do some arithmetic, but the big advantage of elliptic curves is that the numbers are way, way smaller than for, say, RSA for equivalent security.
Much research these days is on “post-quantum cryptography” – cryptography that is secure against attacks by quantum computers (assuming we ever make those work). These tend not to be based on “arithmetic” in any straightforward sense – the ones that seem to be at the forefront these days are based on computation over lattices.
Cracking a password database that uses DES is so far away from what cryptography today is about that it’s not even related. Yes, the original Unix implementations – almost 50 years ago – used that approach. So?
C++ lambda functions are syntactic sugar for a longstanding set of practices in both C and C++: passing a function as an argument to another function, and possibly connecting a little bit of state to it.
This goes way back. Look at C’s qsort():
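The declaration, as found in <stdlib.h> (or <cstdlib> in C++), is:

void qsort(void *base, size_t nmemb, size_t size,
           int (*compar)(const void *, const void *));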
That last argument is a function pointer to a comparison function. You could use a captureless lambda for the same purpose in modern C++.
Sometimes, you want to tack a little bit of extra state alongside the function. In C, one way to do this is to provide an additional context pointer alongside the function pointer. The context pointer will get passed back to the function as an argument.
In C++, that context pointer can be this. When you do that, you have something called a function object. (Side note: function objects were sometimes called functors; however, functors aren’t really the same thing.)
If you overload the function call operator for a particular class, then objects of that class behave as function objects. That is, you can pretend like the object is a function by putting parentheses and an argument list after the name of an instance! When you arrive at the overloaded operator implementation, this will point at the instance.
Instances of this class will add an offset to an integer. The function call operator is operator() below.
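(The answer’s original listing isn’t reproduced in this compilation; here is a minimal reconstruction of such a class:)

// Reconstruction, not the original listing: a function object that adds a
// fixed offset to an integer.
class AddOffset {
public:
    explicit AddOffset(int offset) : offset_(offset) {}
    int operator()(int x) const { return x + offset_; }  // the call operator
private:
    int offset_;
};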
and to use it:
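(again a reconstruction, chosen to match the output described next:)

#include <iostream>

int main() {
    AddOffset add42(42);                // "this" will point at add42 inside operator()
    for (int i = 0; i < 10; ++i)
        std::cout << add42(i) << '\n';  // invokes AddOffset::operator()(int)
}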
That’ll print out the numbers 42, 43, 44, … 51 on separate lines.
And tying this back to the qsort() example from earlier: C++’s std::sort can take a function object for its comparison operator.
Modern C++’s lambda functions are syntactic sugar for function objects. They declare a class with an unutterable name, and then give you an instance of that class. Under the hood, the class’ constructor implements the capture, and initializes any state variables.
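To make the equivalence concrete, a capture like the one below behaves much like an instance of the hand-written AddOffset class above (a sketch):

#include <iostream>

int main() {
    int offset = 42;
    // The compiler generates an unnameable class with an int member holding
    // the capture and a const operator()(int); add42 is an instance of it.
    auto add42 = [offset](int x) { return x + offset; };
    std::cout << add42(0) << '\n';  // prints 42
}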
Other languages have similar constructs. I believe this one originated in LISP. It goes waaaay back.
As for any challenges associated with them: lifetime management. You potentially introduce a non-nested lifetime for any state associated with the callback, function object, or lambda.
If it’s all self contained (i.e. it keeps its own copies of everything), you’re less likely to have a problem. It owns all the state it relies on.
If it has non-owning pointers or references to other objects, you need to ensure the lifetime of your callback/function object/lambda remains within the lifetime of that other non-owned object. If that non-owned object’s lifetime isn’t naturally a superset of the callback/function object/lambda, you should consider taking a copy of that object, or reconsider your design.
Visual Studio Code is OK if you can’t find anything better for the language you’re using. There are better alternatives for most popular languages.
C# – Use Visual Studio Community, it’s free, and far better than Visual Studio Code.
Java – Use IntelliJ
Go – Goland.
Python – PyCharm.
C or C++ – CLion.
If you’re using a more unusual language, maybe Rust, Visual Studio Code might be a good choice.
Comments:
#1: Just chipping in here. I used to be a massive Visual Studio fanboy and loved my fancy GUI for doing things without knowing what was actually happening. I’ve been using VS Code and Linux for a few years now and am really enjoying the bare-metal exposure you get from working with them; typing commands is way faster for getting things done than mouse-clicking through a bunch of GUIs. Both are good though.
#2: C# is unusual in that it’s the only language which doesn’t follow the maxim, “if JetBrains have blessed your language with attention, use their IDE”.
Visual Studio really is first class.
#3: for Rust as long as you have rust-analyzer and clippy, you’re good to go. Vim with lua and VS Code both work perfectly.
#4: This is definitely skirting the realm of opinion. It’s a great piece of software. There is better and worse stuff but it all depends upon the person using it, their skill, and style of development.
#5: VSCode is excellent for coding. I’ve been using it for about 6 years now, mainly for Python work, but also developing JS based mobile apps. I mainly use Visual Studio, but VSC’s slightly stripped back nature has been embellished with plenty of updates and more GUI discovery methods, plus that huge extensions library (I’ve worked with the creation of an intellisense style plugin as well).
I’m personally a fan of keeping it simple on IDEs, and I work in a lot of languages. I’m not installing 6 or 7 IDEs because they apparently have advantages in that specific language, so I’d rather install one IDE which can do a credible job on all of them.
I’m more a fan of developing software than getting anally retentive about knowing all the keyboard shortcuts to format a source file. Life’s too short for that. Way too short!
Dmitry Aliev is correct that this was introduced into the language before references.
I’ll take this question as an excuse to add a bit more color to this.
C++ evolved from C via an early dialect called “C with Classes”, which was initially implemented with Cpre, a fancy “preprocessor” targeting C that didn’t fully parse the “C with Classes” language. What it did was add an implicit this pointer parameter to member functions. E.g.:
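(The original snippet isn’t included in this compilation; the following reconstruction gives the idea.)

// Reconstruction: a class with a member function, as written by the user.
struct S {
    int f();
};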
was translated to something like:
int f__1S(S *this);
(the funny name f__1S is just an example of a possible “mangling” of the name of S::f, which allows traditional linkers to deal with the richer naming environment of C++).
What might come as a surprise to the modern C++ programmer is that in that model this is an ordinary parameter variable and therefore it can be assigned to! Indeed, in the early implementations that was possible.
Interestingly, an idiom arose around this ability: Constructors could manage class-specific memory allocation by “assigning to this” before doing anything else in the constructor. E.g.:
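(A reconstructed sketch of the idiom; the allocator names are invented, and none of this compiles in standard C++.)

// Pre-standard idiom, reconstructed for illustration only; my_allocate and
// my_free are made-up names, and assigning to "this" is illegal today.
class X {
public:
    X() {
        this = (X*)my_allocate(sizeof(X));  // constructor takes over allocation
        // ... normal initialization of members ...
    }
    ~X() {
        my_free(this);
        this = 0;                           // suppress the default deallocation
    }
};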
That technique (brittle as it was, particularly when dealing with derived classes) became so widespread that when C with Classes was re-implemented with a “real” compiler (Cfront), assignment to this remained valid in constructors and destructors even though this had otherwise evolved into an immutable expression. The C++ front end I maintain still has modes that accept that anachronism. See also section 17 of the old Cfront manual found here, for some fun reminiscing.
When standardization of C++ began, the core language work was handled by three working groups: Core I dealt with declarative stuff, Core II dealt with expression stuff, and Core III dealt with “new stuff” (templates and exception handling, mostly). In this context, Core II had to (among many other tasks) formalize the rules for overload resolution and the binding of this. Over time, they realized that that name binding should in fact be mostly like reference binding. Hence, in standard C++, the binding of this in a member function call works essentially like reference binding:
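(An illustration of mine, not from the original answer:)

struct S {
    int x = 0;
    int f() { return this->x; }  // "this" behaves like &__this
};

int main() {
    S s;
    s.f();  // conceptually: bind an implicit reference __this to s, as in S& __this = s;
}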
In other words, the expression this is now effectively a kind of alias for &__this, where __this is just a name I made up for an unnamable implicit reference parameter.
C++11 further tweaked this by introducing syntax to control the kind of reference that this is bound from. E.g.,
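(a sketch of the C++11 ref-qualifier syntax being referred to:)

struct S {
    void f() & {}   // chosen for lvalues: the hidden object parameter binds like S&
    void f() && {}  // chosen for rvalues: it binds like S&&
};

int main() {
    S s;
    s.f();    // picks the & overload
    S{}.f();  // picks the && overload
}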
That model was relatively well-understood by the mid-to-late 1990s… but then unfortunately we forgot about it when we introduced lambda expressions. Indeed, in C++11 we allowed lambda expressions to “capture” this:
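(a sketch of the kind of capture in question:)

#include <functional>

struct Counter {
    int value = 0;
    std::function<int()> reader() {
        // [this] captures only the pointer; if this Counter is destroyed
        // before the lambda runs, the lambda is left with a dangling pointer.
        return [this] { return value; };
    }
};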
After that language feature was released, we started getting many reports of buggy programs that “captured” this thinking they captured the class value, when instead they really wanted to capture __this (or *this). So we scrambled to try to rectify that in C++17, but because lambdas had gotten tremendously popular we had to make a compromise. Specifically:
we introduced the ability to capture *this
we allowed [=, this] since now [this] is really a “by reference” capture of *this
even though [this] was now a “by reference” capture, we left in the ability to write [&, this], despite it being redundant (compatibility with earlier standards)
Our tale is not done, however. Once you write much generic C++ code you’ll probably find out that it’s really frustrating that the __this parameter cannot be made generic because it’s implicitly declared. So we (the C++ standardization committee) decided to allow that parameter to be made explicit in C++23. For example, you can write (example from the linked paper):
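(Not a verbatim quote from P0847, but a sketch in its spirit; note that the object parameter is taken by value:)

// C++23 "explicit object parameter": the previously hidden parameter is now
// spelled out, and here it is passed by value rather than by reference.
struct less_than {
    bool operator()(this less_than self, int a, int b) {
        return a < b;
    }
};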
In that example, the “object parameter” (i.e., the previously hidden reference parameter __this) is now an explicit parameter and it is no longer a reference!
Here is another example (also from the paper):
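(Again a sketch along the lines of the paper’s examples rather than a verbatim quote:)

// The explicit object parameter is now a deduced, template-dependent type.
struct add_postfix_increment {
    template <typename Self>
    auto operator++(this Self&& self, int) {
        auto copy = self;
        ++self;        // calls the derived class's prefix operator++
        return copy;
    }
};

struct counter : add_postfix_increment {
    int value = 0;
    counter& operator++() { ++value; return *this; }
};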
Here:
the type of the object parameter is a deducible template-dependent type
the deduction actually allows a derived type to be found
This feature is tremendously powerful, and may well be the most significant addition by C++23 to the core language. If you’re reasonably well-versed in modern C++, I highly recommend reading that paper (P0847) — it’s fairly accessible.
It adds some extra steps in design, testing and deployment for sure. But it can buy you an easier path to scalability and an easier path to fault tolerance and live system upgrades.
It’s not REST itself that enables that. But if you use REST you will have split your code up into independently deployable chunks called services.
So more development work to do, yes, but you get something a single monolith can’t provide. If you need that, then the REST service approach is a quick way to doing it.
We must compare like for like in terms of results for questions like this.
Based on what I could find, the strtok library function appeared in System III UNIX some time in 1980.
In 1980, memory was small, and programs were single threaded. I don’t know whether UNIX had any support for multiple processors, even. I think that happened a few years later.
This was 3 years before they started the standardization process, and 9 years before it was standardized in ANSI C.
This was simple and good enough, and that’s what mattered most. It’s far from the only library function with internal state.
And Lex/YACC took over more complex scanning and parsing tasks, so it probably didn’t get a lot of attention for the lightweight uses it was put to.
For a tongue-in-cheek take on how UNIX and C were developed, read this classic:
The Rise of “Worse is Better”, by Richard Gabriel

I and just about every designer of Common Lisp and CLOS has had extreme exposure to the MIT/Stanford style of design. The essence of this style can be captured by the phrase “the right thing.” To such a designer it is important to get all of the following characteristics right:

· Simplicity: the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
· Correctness: the design must be correct in all observable aspects. Incorrectness is simply not allowed.
· Consistency: the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
· Completeness: the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.

I believe most people would agree that these are good characteristics. I will call the use of this philosophy of design the “MIT approach.” Common Lisp (with CLOS) and Scheme represent the MIT approach to design and implementation.

The worse-is-better philosophy is only slightly different:

· Simplicity: the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
· Correctness: the design must be correct in all observable aspects. It is slightly better to be simple than correct.
· Consistency: the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.
· Completeness: the design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.

Early Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the “New Jersey approach.” I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach.

However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach. Let me start out by retelling a story that shows that the MIT/New-Jersey distinction is valid and that proponents of each philosophy actually believe their philosophy is better.
Because the ‘under the hood’ code is about 50 years old. I’m not kidding. I worked on some video poker machines that were made in the early 1970’s.
Here’s how they work.
You have an array of ‘cards’ from 0 to 51. Pick one at random. Slap it in position 1 and take it out of your array. Do the same for the next card … see how this works?
Video poker machines are really that simple. They literally simulate a deck of cards.
Anything else, at least in Nevada, is illegal. Let me rephrase that, it is ILLEGAL, in all caps.
If you were to try to make a video poker game (or video keno, or slot machine) in any other way than as close to truly random selection from an ‘array’ of options as you can get, Nevada Gaming will come after you so hard and fast, your third cousin twice removed will have their ears ring for a week.
That is if the Families don’t get you first, and they’re far less kind.
All the ‘magic’ is in the payout tables, which on video poker and keno are literally posted on every machine. If you can read them, you can figure out exactly what the payout odds are for any machine.
There’s also a little note at the bottom stating that the video poker machine you’re looking at uses a 52 card deck.
Comments:
1- I have a slot machine, and the code on the odds chip looks much like an Excel spreadsheet: every combination is displayed in this spreadsheet, so the exact odds can be listed in payout tables. The machine picks a random number, let’s say 452 out of 1000. The computer looks at the spreadsheet and says that this is the combination bar-bar-7, and you get 2 credits for this combination. The wheels will spin to match the indication in the spreadsheet. If I go into the game diagnostics, I can see whether it is a win or not; you do not win on what the wheels display, but on the actual number from the spreadsheet. The game knows if you won or lost before the wheels stop.
2- I had a conversation with a guy who had retired from working in casino security. He was also responsible for some setup and maintenance on slot machines, video poker and others. I asked about the infamous video poker machine in which a programmer at the manufacturer had put a backdoor so he and a few pals could get money. That was just before he’d started, but he knew how it was done. IIRC there was a 25-step process of combinations of coin drops and button presses to make the machine hit a royal flush and pay the jackpot.
Slot machines that have mechanical reels actually run very large virtual reels. The physical reels have position encoders so the electronics and software can select which symbol to stop on. This makes for far more possible combinations than relying on the space available on the physical reels.
Those islands of machines with the sign that says 95% payout? Well, you guess which machine in the group is set to that payout % while the rest are much closer to the minimum allowed.
Machines with a video screen that gives you a choice of things to select by touch or button press? It doesn’t matter what you select, the outcome is pre-determined. For example, if there’s a grid of spots and the first three matches you get determines how many free spins you get, if the code stopped on giving you 7 free spins, out of a possible maximum of 25, you’re getting 7 free spins no matter which spots you touch. It will tease you with a couple of 25s, a 10 or 15 or two, but ultimately you’ll get three 7s, and often the 3rd 25 will be close to the other two or right next to the last 7 “you” selected to make you feel like you just missed it when the full grid is briefly revealed.
There was a Discovery Channel show where the host used various power tools to literally hack things apart to show their insides and how they worked. In one episode he sawed open a couple of slot machines, one from the 1960’s and a purely mechanical one from the 1930’s or possibly 1940’s. In that old machine he discovered the casino it had been in decades prior had installed a cheat. There was a metal wedge bolted into the notch for the 7 on one reel so it could never hit the 777 jackpot. I wondered if the Nevada Gaming Commission could trace the serial number and if they could levy a fine if the company that had owned and operated it was still in business.
3- Slightly off-topic. I worked for a company that sold computer hardware, and one of our customers was a company that makes gambling machines. They said that they spent close to $0 on software and all their budget on licensing characters.
This question is like asking why you would ever use int when you have the Integer class. Java programmers seem especially zealous about everything needing to be wrapped, and wrapped, and wrapped.
Yes, ArrayList<Integer> does everything that int[] does and more… but sometimes all you need to do is swat a fly, and you just need a flyswatter, not a machine-gun.
Did you know that in order to convert int[] to ArrayList<Integer>, the system has to go through the array elements one at a time and box them, which means creating a garbage-collected object on the heap (i.e. Integer) for each individual int in the array? That’s right; if you just use int[], then only one memory allocation is needed, as opposed to one for each item.
I understand that most Java programmers don’t know about that, and the ones who do probably don’t care. They will say that this isn’t going to be the reason your program is running slowly. They will say that if you need to care about those kinds of optimizations, then you should be writing code in C++ rather than Java. Yadda yadda yadda, I’ve heard it all before. Personally though, I think that you should know, and should care, because it just seems wasteful to me. Why dynamically allocate n individual objects when you could just have a contiguous block in memory? I don’t like waste.
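For readers who think better in code, here is a rough C++ analogy for that layout difference (my own illustration, not part of the original rant): a contiguous block of ints versus one separately heap-allocated object per element, which is roughly what boxing each int into an Integer costs:

#include <cstdio>
#include <memory>
#include <vector>

int main() {
    // Analogue of int[]: a single allocation, elements contiguous in memory.
    std::vector<int> flat = {1, 2, 3, 4, 5};

    // Analogue of boxing into ArrayList<Integer>: every element is a separate
    // heap-allocated object reached through a pointer, so n elements cost
    // n extra allocations on top of the container itself.
    std::vector<std::unique_ptr<int>> boxed;
    for (int v : flat)
        boxed.push_back(std::make_unique<int>(v));

    int sum = 0;
    for (int v : flat) sum += v;            // walks contiguous memory
    for (const auto& p : boxed) sum += *p;  // chases one pointer per element
    std::printf("%d\n", sum);               // prints 30 either way
}

Same result, very different memory behavior; that is the waste being complained about.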
I also happen to know that if you have a blasé attitude about performance in general, then you’re apt to be the sort of programmer who unknowingly, unnecessarily writes four nested loops and then has no idea why their program took ten minutes to run even though the list was only 100 elements long. At that point, not even C++ will save you from your inefficiently written code. There’s a slippery slope here.
I believe that a software developer is a sort of craftsman. They should understand their craft, not only at the language level, but also how it works internally. They should convert int[] to ArrayList<Integer> only because they know the cost is insignificant, and they have a particular reason for doing so other than “I never use arrays, ArrayList is better LOL”.
Last time I needed to write an Android app, even though I already knew Java, I still went with Kotlin 😀
I’d rather work in a language I don’t know than… Java… and yes, I know a decent Java IDE can auto-generate this code – but this only solves the problem of writing the code, it doesn’t solve the problem of having to read it, which happens a lot more than writing it.
I mean, which of the below conveys the programmer’s intent more clearly, and which one would you rather read when you forget what a part of the program does and need a refresher:
Even if both of them required no effort to write… the Java version is pure brain poison…
If you have two books on the same subject, but one is skinny and the other is fat, go with the skinny one. For example:
The book on the left has 796 pages; the book on the right a mere 176. Yet the book on the right told us everything we needed to know to write our own, efficient, native-code-generating Plain English compiler in Plain English:
The Osmosian Order of Plain English Programmers Welcomes You
Program in a language you already know
https://osmosianplainenglishprogramming.blog/
Compare also the Inside Macintosh documentation before and after the Pascal programmers were replaced with C programmers:
Note that the whole set (green arrow) documenting the slim and trim Pascal system was the same size as a single volume (red arrow) of the bloated C version.
Because volatile is insufficient to deal with the memory semantics of current computers. In fact, it was obsolete almost as soon as it first became available.
Volatile tells a compiler that it may not assume the value of a memory location stays unchanged between reads or writes, so every access in the source must actually be performed. This is sometimes sufficient to deal with memory-mapped hardware registers, which is what it was originally for.
But that doesn’t deal with the semantics of a multiprocessor machine’s cache, where a memory location might be written and read from several different places, and we need to be sure we know when written values will be observable relative to control flow in the writing thread.
Instead, we need to deal with acquire/release semantics for values, and the compilers have to emit the right machine instructions so that we actually get those semantics from the real machine. So the atomic memory intrinsics come to the rescue. This is also why inline assembler acts as an optimization barrier; before there were intrinsics for this, it was done with inline assembler. But intrinsics are better, because the compiler can still do some optimization with them.
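As a minimal sketch of the difference (my example, not the original answerer’s), here is the usual publish/consume pattern written with C++11 std::atomic; a plain or volatile flag would give no ordering guarantee, while the release/acquire pair does:

#include <atomic>
#include <cassert>
#include <thread>

int payload = 0;                     // ordinary data handed from one thread to another
std::atomic<bool> ready{false};      // publication flag with real ordering semantics

void producer() {
    payload = 42;                                  // write the data first
    ready.store(true, std::memory_order_release);  // release: makes the write above visible
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) // acquire: pairs with the release store
        ;                                          // spin until the flag is observed
    assert(payload == 42);                         // guaranteed once the acquire load sees true
}

int main() {
    std::thread a(producer), b(consumer);
    a.join();
    b.join();
}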
C++ is a programming language specified through a standard that is “abstract” in various ways. For example, that standard doesn’t currently formally recognize a notion of “runtime” (I would actually like to change that a little bit in the future, but we’ll see).
Now, in order to allow implementations to make assumptions, the standard removes certain situations from the responsibility of the implementation. For example, it doesn’t require (in general) that the implementation ensure that accesses to objects are within the bounds of those objects. By dropping that requirement, the code for valid accesses can be more efficient than it would have to be if out-of-bounds situations were the responsibility of the implementation (as they are in most other modern programming languages). Those “situations” are what we call “undefined behaviour”: the implementation has no specific responsibilities, and so the standard allows “anything” to happen. This is in part why C++ is still very successful in applications that call for the efficient use of hardware resources.
Note, however, that the standard doesn’t disallow an implementation from doing something that is implementation-specified in those “undefined behaviour” situations. It’s perfectly all right (and feasible) for a C++ implementation to be “memory safe” for example (e.g., not attempt access outside of object bounds). Such implementations have existed in the past (and might still exist, but I’m not currently aware of one that completely “contains” undefined behaviour).
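A minimal sketch of the bounds point (again my example): the in-bounds access below is fully specified, while the access one past the end is undefined behaviour, so the implementation is not obliged to check it, diagnose it, or do anything in particular:

#include <cstdio>

int main() {
    int a[4] = {1, 2, 3, 4};

    std::printf("%d\n", a[2]);     // in bounds: the implementation must print 3

    // a[4] is outside the object. The standard places no requirement on the
    // implementation here, which is exactly what lets the in-bounds case be
    // compiled without any bounds check.
    // std::printf("%d\n", a[4]);  // undefined behaviour if uncommented
}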
ADDENDUM (July 16th, 2021):
The following article about undefined behavior crossed my metaphorical desk today:
Coding is the process of translating and transforming a problem into a step-by-step set of instructions for a machine. Like any skill, coding takes time and practice to learn. However, by following some simple tips, you can make the learning process easier and faster. First, it is important to start with the basics. Do not try to learn too many programming languages at once; it is better to focus on one language and master it before moving on to the next. Second, make use of resources such as books, online tutorials, and coding bootcamps. These can provide the structure and support you need to progress quickly. Finally, practice regularly and find a mentor who can offer guidance and feedback. By following these tips, you can develop the programming skills you need to succeed in your career.
There are plenty of resources available to help you improve your coding skills. Check out some of our favorite coding tips below:
– Find a good code editor and learn its shortcuts. This will save you time in the long run.
– Do lots of practice exercises. It’s important to get comfortable with the syntax and structure of your chosen programming language.
– Get involved in the coding community. There are many online forums and groups where programmers can ask questions, share advice, and collaborate on projects.
– Read code written by experienced developers. This will give you insight into best practices and advanced techniques.
A tab is not made out of spaces. It is a tab, whether in Java, Python, Rust, or generic text file editing. It is represented by a single Unicode character, U+0009.
It does not generally mean “insert this many spaces here” either. It means “put the cursor at the next closest tab stop in the line”. What that means exactly depends on the context. On an old typewriter I had, the tab key would advance the roller to the next column that was a multiple of 10.
That is pretty much the same function as the tab character does.
Just for reference, modern text editors usually have their tab stops set at every 4 or every 8 characters. That doesn’t mean that 1 tab = 4 or 8 spaces; it means that inserting a tab will align the cursor with the next multiple of 4 or 8 columns.
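A tiny illustration of “align with the next multiple of N columns” (my own sketch, using zero-based columns): the tab does not add a fixed number of spaces, it jumps to the next stop, so the distance moved depends on where you already are:

#include <cstdio>

// Next tab stop for a zero-based column and a given tab width.
int nextTabStop(int column, int tabWidth) {
    return (column / tabWidth + 1) * tabWidth;
}

int main() {
    std::printf("%d\n", nextTabStop(3, 8));   // 8: a tab at column 3 skips 5 columns
    std::printf("%d\n", nextTabStop(7, 8));   // 8: a tab at column 7 skips only 1 column
    std::printf("%d\n", nextTabStop(8, 8));   // 16: exactly on a stop, jump a full width
}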
In mainstream IDEs you can set the tab key to insert a desired number of spaces instead of a tab character.
The concept of a tab as something independent of spaces is rarely relied on these days. In any case, what the character represents, what the key does, and what the screen shows are all decoupled from one another.
In many IDEs, pressing the Tab key inserts the required number of spaces to advance to the next tab stop. This is often the default.
I imagine it’s a compromise between tab-loving extremists and space advocates. The ideal whitespace character is a subject of intense debate among programmers.
Definition: jQuery is a lightweight, “write less, do more”, JavaScript library. jQuery is a cross-platform JavaScript library designed to simplify the client-side scripting of HTML.
Purpose: The purpose of jQuery is to make it much easier to use JavaScript on your website. jQuery’s syntax is designed to make it easier to navigate a document, select DOM elements, create animations, handle events, and develop Ajax applications. jQuery also simplifies a lot of the complicated things in JavaScript, like AJAX calls and DOM manipulation.
What is in it? The jQuery library contains the following features:
HTML/DOM manipulation
CSS manipulation
HTML event methods
Effects and animations
AJAX
Utilities
Advantages: The modular approach to the jQuery library allows the creation of powerful dynamic web pages and web applications. jQuery has plugins for almost any task out there.
jQuery Syntax: The jQuery syntax is tailor-made for selecting HTML elements and performing some action on the element(s).
JSON is a lightweight, text-based open standard designed for human-readable data interchange. It is the most widely used format for exchanging data on the web. It originates from the JavaScript language and is built on two primary data structures: ordered lists (known as “arrays”) and name/value pairs (known as “objects”).
Why JSON?
The JSON standard is language-independent and its data structures, arrays and objects, are universally recognized. These structures are supported in some way by nearly all modern programming languages and are familiar to nearly all programmers. These qualities make it an ideal format for data interchange on the web.
JSON vs XML
The XML specification does not match the data model of most programming languages, which makes it slow and tedious for programmers to parse. Compared to JSON, XML also has a lower data-to-markup ratio, which makes it more difficult for humans to read and write.
JSON Data Types
Number: { "myNum": 123.456 } A series of numbers; decimals OK; double-precision floating-point format.
String: { "myString": "abcdef" } A series of characters (letters, numbers, or symbols); double-quoted UTF-8 with backslash escaping.
Boolean: { "myBool": true } True or false.
Array: { "myArray": [ "a", "b", "c", "d" ] } A sequence of comma-separated values (any data type); enclosed in square brackets.
Object: { "myObject": { "id": 7 } } An unordered collection of comma-separated key/value pairs; enclosed in curly braces; properties (keys) are distinct strings.
Null: { "myNull": null } A variable with a null (empty) value.
Unsupported Data Types
Undefined: var myUndefined; A variable with no value assigned.
Date: var myDate = new Date(); An object used to work with dates and times.
Error: var myError = new Error(); An object containing information about errors.
Regular Expression: var myRegEx = /json/i; A variable containing a sequence of characters that form a search pattern.
Function: var myFunction = function(){}; A variable containing a block of code designed to perform a particular task.
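To tie the supported types together, here is a small sketch that parses one document containing each of them. It assumes the widely used nlohmann/json C++ library, which is not part of the original text, purely for illustration:

#include <iostream>
#include <nlohmann/json.hpp>   // assumed dependency, not from the original article

int main() {
    // One document exercising every supported JSON data type.
    auto j = nlohmann::json::parse(R"({
        "myNum":    123.456,
        "myString": "abcdef",
        "myBool":   true,
        "myArray":  ["a", "b", "c", "d"],
        "myObject": { "id": 7 },
        "myNull":   null
    })");

    std::cout << j["myNum"].get<double>()       << "\n";  // 123.456
    std::cout << j["myArray"].size()            << "\n";  // 4
    std::cout << j["myObject"]["id"].get<int>() << "\n";  // 7
    std::cout << std::boolalpha
              << j["myNull"].is_null()          << "\n";  // true
}

The unsupported JavaScript types listed above (undefined, Date, Error, regular expressions, functions) simply have no representation in such a document; they must be converted to one of the supported types, or omitted, before serialization.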