Programming, Coding and Algorithms Questions and Answers

Popular Programming Languages

This blog is an aggregate of clever questions and answers about Programming, Coding, and Algorithms. It is a safe place for programmers who are interested in optimizing their code, learning to code for the first time, or simply immersing themselves in a coding environment.

 

I think the most common mistakes I witnessed or made myself when learning are:

1: Trying to memorize every language construct. Do not rely on your memory; use Stack Overflow.


2: Spending a lot of time solving an issue yourself before you google it. Just about every issue you can stumble upon has, in 99.99% of cases, already been solved by someone else. Learn to search properly for solutions first.

3: Spending a couple of days on a task and realizing it was not worth it. If the time you spend on a single problem is more than half an hour, you are probably doing it wrong; search for alternatives.

4: Writing code from scratch. Do not reinvent the wheel: if you need to write a blog, just search for a demo application in the language and framework you chose, and build your logic on top of it. Need some other feature? Search for another demo incorporating this feature, and use its code.

In programming you need to be smart and prioritize your time wisely. Diving into deep rabbit holes will not earn you good money.

List of Freely Available Programming Books – What is the single most influential book every programmer should read?

  • Bjarne Stroustrup – The C++ Programming Language
  • Brian W. Kernighan & Rob Pike – The Practice of Programming
  • Donald Knuth – The Art of Computer Programming
  • Ellen Ullman – Close to the Machine
  • Ellis Horowitz – Fundamentals of Computer Algorithms
  • Eric Raymond – The Art of Unix Programming
  • Gerald M. Weinberg – The Psychology of Computer Programming
  • James Gosling – The Java Programming Language
  • Joel Spolsky – The Best Software Writing I
  • Keith Curtis – After the Software Wars
  • Richard M. Stallman – Free Software, Free Society
  • Richard P. Gabriel – Patterns of Software
  • Richard P. Gabriel – Innovation Happens Elsewhere
  • Code Complete (2nd edition) by Steve McConnell
  • The Pragmatic Programmer
  • Structure and Interpretation of Computer Programs
  • The C Programming Language by Kernighan and Ritchie
  • Introduction to Algorithms by Cormen, Leiserson, Rivest & Stein
  • Design Patterns by the Gang of Four
  • Refactoring: Improving the Design of Existing Code
  • The Mythical Man Month
  • The Art of Computer Programming by Donald Knuth
  • Compilers: Principles, Techniques and Tools by Alfred V. Aho, Ravi Sethi and Jeffrey D. Ullman
  • Gödel, Escher, Bach by Douglas Hofstadter
  • Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin
  • Effective C++
  • More Effective C++
  • CODE by Charles Petzold
  • Programming Pearls by Jon Bentley
  • Working Effectively with Legacy Code by Michael C. Feathers
  • Peopleware by Demarco and Lister
  • Coders at Work by Peter Seibel
  • Surely You’re Joking, Mr. Feynman!
  • Effective Java 2nd edition
  • Patterns of Enterprise Application Architecture by Martin Fowler
  • The Little Schemer
  • The Seasoned Schemer
  • Why’s (Poignant) Guide to Ruby
  • The Inmates Are Running The Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity
  • The Art of Unix Programming
  • Test-Driven Development: By Example by Kent Beck
  • Practices of an Agile Developer
  • Don’t Make Me Think
  • Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin
  • Domain Driven Designs by Eric Evans
  • The Design of Everyday Things by Donald Norman
  • Modern C++ Design by Andrei Alexandrescu
  • Best Software Writing I by Joel Spolsky
  • The Practice of Programming by Kernighan and Pike
  • Pragmatic Thinking and Learning: Refactor Your Wetware by Andy Hunt
  • Software Estimation: Demystifying the Black Art by Steve McConnell
  • The Passionate Programmer (My Job Went To India) by Chad Fowler
  • Hackers: Heroes of the Computer Revolution
  • Algorithms + Data Structures = Programs
  • Writing Solid Code
  • JavaScript – The Good Parts
  • Getting Real by 37 Signals
  • Foundations of Programming by Karl Seguin
  • Computer Graphics: Principles and Practice in C (2nd Edition)
  • Thinking in Java by Bruce Eckel
  • The Elements of Computing Systems
  • Refactoring to Patterns by Joshua Kerievsky
  • Modern Operating Systems by Andrew S. Tanenbaum
  • The Annotated Turing
  • Things That Make Us Smart by Donald Norman
  • The Timeless Way of Building by Christopher Alexander
  • The Deadline: A Novel About Project Management by Tom DeMarco
  • The C++ Programming Language (3rd edition) by Stroustrup
  • Patterns of Enterprise Application Architecture
  • Computer Systems – A Programmer’s Perspective
  • Agile Principles, Patterns, and Practices in C# by Robert C. Martin
  • Growing Object-Oriented Software, Guided by Tests
  • Framework Design Guidelines by Brad Abrams
  • Object Thinking by Dr. David West
  • Advanced Programming in the UNIX Environment by W. Richard Stevens
  • Hackers and Painters: Big Ideas from the Computer Age
  • The Soul of a New Machine by Tracy Kidder
  • CLR via C# by Jeffrey Richter
  • The Timeless Way of Building by Christopher Alexander
  • Design Patterns in C# by Steve Metsker
  • Alice in Wonderland by Lewis Carroll
  • Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig
  • About Face – The Essentials of Interaction Design
  • Here Comes Everybody: The Power of Organizing Without Organizations by Clay Shirky
  • The Tao of Programming
  • Computational Beauty of Nature
  • Writing Solid Code by Steve Maguire
  • Philip and Alex’s Guide to Web Publishing
  • Object-Oriented Analysis and Design with Applications by Grady Booch
  • Effective Java by Joshua Bloch
  • Computability by N. J. Cutland
  • Masterminds of Programming
  • The Tao Te Ching
  • The Productive Programmer
  • The Art of Deception by Kevin Mitnick
  • The Career Programmer: Guerilla Tactics for an Imperfect World by Christopher Duncan
  • Paradigms of Artificial Intelligence Programming: Case studies in Common Lisp
  • Masters of Doom
  • Pragmatic Unit Testing in C# with NUnit by Andy Hunt and Dave Thomas with Matt Hargett
  • How To Solve It by George Polya
  • The Alchemist by Paulo Coelho
  • Smalltalk-80: The Language and its Implementation
  • Writing Secure Code (2nd Edition) by Michael Howard
  • Introduction to Functional Programming by Philip Wadler and Richard Bird
  • No Bugs! by David Thielen
  • Rework by Jason Fried and DHH
  • JUnit in Action

Source: Wikipedia

Hidden Features of C#

What are the most hidden features or tricks of C# that even C# fans, addicts, experts barely know?

Here are the revealed features so far:

The full question groups them under Keywords, Attributes, Syntax, Language Features, Visual Studio Features, and Framework; highlights from a few of those categories follow.

Methods and Properties

  • String.IsNullOrEmpty() method by KiwiBastard
  • List.ForEach() method by KiwiBastard
  • BeginInvoke() / EndInvoke() methods by Will Dean
  • Nullable<T>.HasValue and Nullable<T>.Value properties by Rismo
  • GetValueOrDefault method by John Sheehan

Tips & Tricks

  • Nice method for event handlers by Andreas H.R. Nilsson
  • Uppercase comparisons by John
  • Access anonymous types without reflection by dp
  • A quick way to lazily instantiate collection properties by Will
  • JavaScript-like anonymous inline-functions by roosteronacid

Other

  • netmodules by kokos
  • LINQBridge by Duncan Smart
  • Parallel Extensions by Joel Coehoorn
  • This isn’t C# per se, but I haven’t seen anyone who really uses System.IO.Path.Combine() to the extent that they should. In fact, the whole Path class is really useful, but hardly anyone uses it! (See the sketch after this list.)
  • Lambdas and type inference are underrated. Lambdas can have multiple statements, and they double as a compatible delegate object automatically (just make sure the signature matches), as in:
Console.CancelKeyPress +=
    (sender, e) => {
        Console.WriteLine("CTRL+C detected!\n");
        e.Cancel = true;
    };
  • From Rick Strahl: You can chain the ?? operator so that you can do a bunch of null comparisons.
string result = value1 ?? value2 ?? value3 ?? String.Empty;
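
To illustrate the Path.Combine() point above, here is a minimal sketch (the paths are made up for illustration):

string dir = @"C:\Logs";
string file = "app.log";

// Path.Combine handles the separator for you, across platforms.
string full = System.IO.Path.Combine(dir, file);                      // C:\Logs\app.log

// The rest of the Path class is just as handy:
Console.WriteLine(System.IO.Path.GetExtension(full));                 // .log
Console.WriteLine(System.IO.Path.GetFileNameWithoutExtension(full));  // app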

When normalizing strings, it is highly recommended that you use ToUpperInvariant instead of ToLowerInvariant because Microsoft has optimized the code for performing uppercase comparisons.

I remember one time my coworker always changed strings to uppercase before comparing. I always wondered why he did that, because I felt it was more “natural” to convert to lowercase first. After reading the book, now I know why.
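
A minimal sketch of the uppercase-normalization advice; note that when all you need is a comparison, passing a StringComparison avoids normalizing altogether:

string a = "Hello";
string b = "HELLO";

// Normalizing with ToUpperInvariant, as recommended above:
bool same = a.ToUpperInvariant() == b.ToUpperInvariant();              // true

// When you only need a comparison, you can skip normalizing entirely:
bool same2 = string.Equals(a, b, StringComparison.OrdinalIgnoreCase);  // true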

  • My favorite trick is using the null coalesce operator and parentheses to automagically instantiate collections for me.
private IList<Foo> _foo;

public IList<Foo> ListOfFoo
{
    get { return _foo ?? (_foo = new List<Foo>()); }
}
  • Here are some interesting hidden C# features, in the form of undocumented C# keywords:
__makeref

__reftype

__refvalue

__arglist

These are undocumented C# keywords (even Visual Studio recognizes them!) that were added to allow more efficient boxing/unboxing prior to generics. They work in coordination with the System.TypedReference struct.
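
A small sketch of how these fit together with System.TypedReference (they compile with the standard C# compiler despite being undocumented):

int i = 42;
TypedReference tr = __makeref(i);     // take a typed reference to i
Type t = __reftype(tr);               // System.Int32
int v = __refvalue(tr, int);          // read through the reference (42)
__refvalue(tr, int) = 99;             // write through it; i is now 99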



There’s also __arglist, which is used for variable length parameter lists.


One thing folks don’t know much about is System.WeakReference — a very useful class that keeps track of an object but still allows the garbage collector to collect it.
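
A minimal sketch of the idea; forcing a garbage collection like this is for demonstration only:

var data = new byte[1024];
var weak = new WeakReference(data);

data = null;        // drop the only strong reference
GC.Collect();       // demonstration only; don't force collections in real code

if (weak.Target is byte[] stillAlive)
    Console.WriteLine("The object survived this collection.");
else
    Console.WriteLine("The object was collected.");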

The most useful “hidden” feature would be the yield return keyword. It’s not really hidden, but a lot of folks don’t know about it. LINQ is built atop this; it allows for delay-executed queries by generating a state machine under the hood. Raymond Chen recently posted about the internal, gritty details.
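
A minimal sketch: the method body below does not run until the sequence is enumerated, one element per iteration:

static IEnumerable<int> Squares(int count)
{
    for (int i = 0; i < count; i++)
        yield return i * i;    // the compiler generates the state machine for you
}

// Each iteration of this loop resumes Squares where it left off:
foreach (var sq in Squares(5))
    Console.WriteLine(sq);     // 0, 1, 4, 9, 16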

  • Using @ for variable names that are keywords.
var @object = new object();
var @string = "";
var @if = IpsoFacto();
  • If you want to exit your program without calling any finally blocks or finalizers use FailFast:
Environment.FailFast("A fatal error occurred");  // FailFast requires a message argument

Read more hidden C# Features at Hidden Features of C#? – Stack Overflow

Hidden Features of Python

Source: Stack Overflow

What IDE to Use for Python

(Screenshot: a spreadsheet comparing Python IDEs by platform and license.)

Acronyms used:

 L  - Linux
 W  - Windows
 M  - Mac
 C  - Commercial
 F  - Free
 CF - Commercial with Free limited edition
 ?  - To be confirmed

What is the right JSON content type?

For JSON text:


application/json

Example: { "Name": "Foo", "Id": 1234, "Rank": 7 }

For JSONP (runnable JavaScript) with callback:

application/javascript
Example: functionCall({"Name": "Foo", "Id": 1234, "Rank": 7});

Here are some key points that were mentioned in the relevant comments:

IANA has registered the official MIME Type for JSON as application/json.

When asked why not text/json, Crockford seems to have said that JSON is not really JavaScript nor text, and also that IANA was more likely to hand out application/* than text/*.

JSON (JavaScript Object Notation) and JSONP (“JSON with padding”) formats seem very similar, so it can be confusing which MIME type each should use. Even though the formats are similar, there are some subtle differences between them.


So whenever in any doubt, I have a very simple approach (which works perfectly fine in most cases): go and check the corresponding RFC document.

JSON: RFC 4627 (The application/json Media Type for JavaScript Object Notation (JSON)) is the specification of the JSON format. It says in section 6 that the MIME media type for JSON text is

application/json.

JSONP: JSONP (“JSON with padding”) is handled a different way than JSON in a browser. JSONP is treated as a regular JavaScript script and therefore should use application/javascript, the current official MIME type for JavaScript. In many cases, however, the text/javascript MIME type will work fine too.


Note that text/javascript has been marked as obsolete by RFC 4329 (Scripting Media Types), and it is recommended to use the application/javascript type instead. However, for legacy reasons, text/javascript is still widely used and has cross-browser support (which is not always the case with the application/javascript MIME type, especially with older browsers).
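
To make the distinction concrete, here is a minimal sketch of serving JSON with the registered MIME type using .NET's HttpListener; the endpoint URL and payload are made up for illustration:

var listener = new System.Net.HttpListener();
listener.Prefixes.Add("http://localhost:8080/");   // hypothetical endpoint
listener.Start();

var context = listener.GetContext();               // blocks until a request arrives
var response = context.Response;
response.ContentType = "application/json";         // the registered JSON MIME type

byte[] body = System.Text.Encoding.UTF8.GetBytes(
    "{\"Name\": \"Foo\", \"Id\": 1234, \"Rank\": 7}");
response.ContentLength64 = body.Length;
response.OutputStream.Write(body, 0, body.Length);
response.Close();
listener.Stop();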

What are some mistakes to avoid while learning programming?

  1. Overuse of the GOTO statement. Most schools teach that this is a no-no.
  2. Not commenting your code with proper documentation – what exactly does the code do?
  3. Endless loops – a structured loop that has no exit point.
  4. Overwriting memory – destroying data and/or code, especially with dynamic allocation, stacks, and queues.
  5. Not following discipline – requirements, design, code, test, implementation.

Moreover, complex code should have a blueprint – a design. Otherwise it is like building a house without a floor plan. Code that has a requirements and design specification before it is written tends to have a lower error rate, which means less time spent debugging and fixing errors. Source: Quora

Lisp.

The thing that always struck me is that the best programmers I would meet or read all had a couple of things in common.

  1. They didn’t use IDEs, preferring Emacs or Vim.
  2. They all learned or used functional programming (Lisp, Haskell, OCaml).
  3. They all wrote or endorsed some kind of testing, even if it’s just minimal TDD.
  4. They avoided fads and dependencies like the plague.

It is a basic truth that learning Lisp, or any functional programming, will fundamentally change the way you program and think about programming. Source: Quora

What are the Top 20 lesser-known but cool data structures?

1- Tries, also known as prefix-trees or crit-bit trees, have existed for over 40 years but are still relatively unknown. A very cool use of tries is described in “TRASH – A dynamic LC-trie and hash data structure“, which combines a trie with a hash function.

2- Bloom filter: Bit array of m bits, initially all set to 0.

To add an item you run it through k hash functions that will give you k indices in the array which you then set to 1.

To check if an item is in the set, compute the k indices and check if they are all set to 1.

Of course, this gives some probability of false positives (according to Wikipedia it's about 0.61^(m/n), where n is the number of inserted items). False negatives are not possible.


Removing an item is impossible, but you can implement a counting Bloom filter, represented by an array of ints with increment/decrement.
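
To make that concrete, here is a minimal sketch; deriving the k hash functions by seeding one simple string hash is an assumption made purely for illustration:

using System.Collections;

class BloomFilter
{
    private readonly BitArray bits;
    private readonly int m, k;

    public BloomFilter(int m, int k)
    {
        this.m = m;
        this.k = k;
        bits = new BitArray(m);   // m bits, initially all 0
    }

    // One simple seeded hash stands in for k independent hash functions.
    private int Index(string item, int seed)
    {
        unchecked
        {
            int h = 17 + seed * 31;
            foreach (char c in item) h = h * 31 + c;
            return ((h % m) + m) % m;   // clamp to a valid bit index
        }
    }

    public void Add(string item)
    {
        for (int i = 0; i < k; i++) bits[Index(item, i)] = true;
    }

    // true  => probably present (false positives possible)
    // false => definitely absent (no false negatives)
    public bool MightContain(string item)
    {
        for (int i = 0; i < k; i++)
            if (!bits[Index(item, i)]) return false;
        return true;
    }
}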

3- Rope: it's a string that allows for cheap prepends, substrings, middle insertions, and appends. I've really only had use for it once, but no other structure would have sufficed. Regular string and array prepends were just far too expensive for what we needed to do, and reversing everything was out of the question.

4- Skip lists are pretty neat.

Wikipedia
A skip list is a probabilistic data structure, based on multiple parallel, sorted linked lists, with efficiency comparable to a binary search tree (order log n average time for most operations).

They can be used as an alternative to balanced trees (using probabilistic balancing rather than strict enforcement of balancing). They are easy to implement and faster than, say, a red-black tree. I think they should be in every good programmer's toolchest.

If you want to get an in-depth introduction to skip-lists here is a link to a video of MIT’s Introduction to Algorithms lecture on them.

Also, here is a Java applet demonstrating Skip Lists visually.

5- Spatial indices, in particular R-trees and KD-trees, store spatial data efficiently. They are good for geographical map coordinate data and VLSI place-and-route algorithms, and sometimes for nearest-neighbor search.

Bit Arrays store individual bits compactly and allow fast bit operations.

6- Zippers – derivatives of data structures that modify the structure to have a natural notion of a ‘cursor’, a current location. They are really useful as they guarantee indices cannot be out of bounds; they are used, for example, in the xmonad window manager to track which window has focus.

Amazingly, you can derive them by applying techniques from calculus to the type of the original data structure!


7- Suffix tries. Useful for almost all kinds of string searching (http://en.wikipedia.org/wiki/Suffix_trie#Functionality). See also suffix arrays; they’re not quite as fast as suffix trees, but a whole lot smaller.

8- Splay trees (as mentioned above). The reason they are cool is threefold:

    • They are small: you only need the left and right pointers like you do in any binary tree (no node-color or size information needs to be stored)
    • They are (comparatively) very easy to implement
    • They offer optimal amortized complexity for a whole host of “measurement criteria” (log n lookup time being the one everybody knows). See http://en.wikipedia.org/wiki/Splay_tree#Performance_theorems

9- Heap-ordered search trees: you store a bunch of (key, prio) pairs in a tree, such that it’s a search tree with respect to the keys, and heap-ordered with respect to the priorities. One can show that such a tree has a unique shape (and it’s not always fully packed up-and-to-the-left). With random priorities, it gives you expected O(log n) search time, IIRC.

10- A niche one is adjacency lists for undirected planar graphs with O(1) neighbour queries. This is not so much a data structure as a particular way to organize an existing data structure. Here’s how you do it: every planar graph has a node with degree at most 6. Pick such a node, put its neighbors in its neighbor list, remove it from the graph, and recurse until the graph is empty. When given a pair (u, v), look for u in v’s neighbor list and for v in u’s neighbor list. Both have size at most 6, so this is O(1).

By the above algorithm, if u and v are neighbors, you won’t have both u in v’s list and v in u’s list. If you need this, just add each node’s missing neighbors to that node’s neighbor list, but store how much of the neighbor list you need to look through for fast lookup.


11- Lock-free alternatives to standard data structures, i.e. lock-free queues, stacks, and lists, are much overlooked.
They are increasingly relevant as concurrency becomes a higher priority, and they are a much more admirable goal than using mutexes or locks to handle concurrent reads and writes.

Here are some links:
http://www.cl.cam.ac.uk/research/srg/netos/lock-free/
http://www.research.ibm.com/people/m/michael/podc-1996.pdf [Links to PDF]
http://www.boyet.com/Articles/LockfreeStack.html

Mike Acton’s (often provocative) blog has some excellent articles on lock-free design and approaches

12- I think the disjoint set is pretty nifty for cases when you need to divide a bunch of items into distinct sets and query membership. Good implementations of the Union and Find operations result in amortized costs that are effectively constant (the inverse of Ackermann's function, if I recall my data structures class correctly).
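
A minimal union-find sketch with path compression and union by rank, the two optimizations behind that near-constant amortized cost:

class DisjointSet
{
    private readonly int[] parent, rank;

    public DisjointSet(int n)
    {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;   // each item starts in its own set
    }

    public int Find(int x)
    {
        if (parent[x] != x)
            parent[x] = Find(parent[x]);   // path compression
        return parent[x];
    }

    public void Union(int a, int b)
    {
        int ra = Find(a), rb = Find(b);
        if (ra == rb) return;                         // already in the same set
        if (rank[ra] < rank[rb]) (ra, rb) = (rb, ra); // union by rank
        parent[rb] = ra;
        if (rank[ra] == rank[rb]) rank[ra]++;
    }
}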

13- Fibonacci heaps

They’re used in some of the fastest known algorithms (asymptotically) for a lot of graph-related problems, such as the shortest path problem. Dijkstra’s algorithm runs in O(E log V) time with standard binary heaps; using Fibonacci heaps improves that to O(E + V log V), which is a huge speedup for dense graphs. Unfortunately, they have a high constant factor, which often makes them impractical.

14- Anyone with experience in 3D rendering should be familiar with BSP trees. Generally, it's a method of structuring a 3D scene so that it is manageable for rendering, given the camera coordinates and bearing.

Binary space partitioning (BSP) is a method for recursively subdividing a space into convex sets by hyperplanes. This subdivision gives rise to a representation of the scene by means of a tree data structure known as a BSP tree.

In other words, it is a method of breaking up intricately shaped polygons into convex sets, or smaller polygons consisting entirely of non-reflex angles (angles smaller than 180°). For a more general description of space partitioning, see space partitioning.

Originally, this approach was proposed in 3D computer graphics to increase the rendering efficiency. Some other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D computer games, and other computer applications that involve handling of complex spatial scenes.

15- Huffman trees – used for compression.

16- Have a look at Finger Trees, especially if you’re a fan of the previously mentioned purely functional data structures. They’re a functional representation of persistent sequences supporting access to the ends in amortized constant time, and concatenation and splitting in time logarithmic in the size of the smaller piece.

As per the original article:

Our functional 2-3 finger trees are an instance of a general design technique introduced by Okasaki (1998), called implicit recursive slowdown. We have already noted that these trees are an extension of his implicit deque structure, replacing pairs with 2-3 nodes to provide the flexibility required for efficient concatenation and splitting.

A Finger Tree can be parameterized with a monoid, and using different monoids will result in different behaviors for the tree. This lets Finger Trees simulate other data structures.

17- Circular or ring buffers – used for streaming, among other things.
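
A minimal fixed-capacity sketch; overwriting the oldest element when full is one common policy for streaming, assumed here for illustration:

class RingBuffer<T>
{
    private readonly T[] buf;
    private int head, count;    // head = oldest element; count = elements stored

    public RingBuffer(int capacity) { buf = new T[capacity]; }

    public void Write(T item)
    {
        buf[(head + count) % buf.Length] = item;
        if (count == buf.Length)
            head = (head + 1) % buf.Length;   // full: overwrite the oldest
        else
            count++;
    }

    public T Read()
    {
        if (count == 0) throw new System.InvalidOperationException("buffer is empty");
        T item = buf[head];
        head = (head + 1) % buf.Length;
        count--;
        return item;
    }
}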

18- I’m surprised no one has mentioned Merkle trees (i.e. hash trees).

Used in many cases (P2P programs, digital signatures) where you want to verify the hash of a whole file when you only have part of the file available to you.

19- <zvrba> Van Emde-Boas trees

I think it’d be useful to know why they’re cool. In general, the question “why” is the most important to ask 😉

My answer is that they give you O(log log n) dictionaries with {1..n} keys, independent of how many of the keys are in use. Just like repeated halving gives you O(log n), repeated sqrting gives you O(log log n), which is what happens in the vEB tree.

20- An interesting variant of the hash table is called cuckoo hashing. It uses multiple hash functions instead of just one in order to deal with hash collisions. Collisions are resolved by removing the old object from the location specified by the primary hash and moving it to a location specified by an alternate hash function. Cuckoo hashing allows for more efficient use of memory space, because you can increase the load factor up to 91% with only 3 hash functions and still have good access time.
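
A toy sketch of the idea with just two hash functions; the multiplicative second hash and the give-up-after-N-evictions policy are simplifications assumed for illustration (a real implementation would rehash on a cycle):

class CuckooHash
{
    private readonly int?[] table;
    private readonly int size;

    public CuckooHash(int capacity) { size = capacity; table = new int?[capacity]; }

    private int H1(int key) => ((key % size) + size) % size;
    private int H2(int key) => (int)(((uint)key * 2654435761u) % (uint)size);

    public bool Insert(int key)
    {
        int cur = key, pos = H1(cur);
        for (int i = 0; i < size; i++)                        // bound the eviction chain
        {
            if (table[pos] == null) { table[pos] = cur; return true; }
            (cur, table[pos]) = (table[pos].Value, cur);      // evict the occupant
            pos = pos == H1(cur) ? H2(cur) : H1(cur);         // send it to its other slot
        }
        return false;   // probable cycle; a real table would rehash and retry
    }

    public bool Contains(int key) => table[H1(key)] == key || table[H2(key)] == key;
}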

Honorable mentions: splay trees, cuckoo hashing, min-max heaps, cache-oblivious data structures, left-leaning red-black trees, work-stealing queues, bootstrapped skew-binomial heaps, KD-trees, MX-CIF quadtrees, HAMT, inverted indexes, Fenwick trees, ball trees, van Emde Boas trees, nested sets, the half-edge data structure, scapegoat trees, unrolled linked lists, 2-3 finger trees, pairing heaps, interval trees, XOR linked lists, binary decision diagrams, region quadtrees, treaps, counted unsorted balanced B-trees, Arne Andersson trees, DAWGs, BK-trees (Burkhard-Keller trees), Zobrist hashing, persistent data structures, B* trees, and deletable Bloom filters (DlBF).

Also: ring buffers, skip lists, priority deques, ternary search trees, the FM-index, PQ-trees, sparse matrix data structures, delta lists/delta queues, bucket brigades, the Burrows–Wheeler transform, corner-stitched data structures, disjoint-set forests, binomial heaps, and cycle sort.

What and where are the stack and the heap?

  • Where and what are they (physically in a real computer’s memory)?
  • To what extent are they controlled by the OS or language run-time?
  • What is their scope?
  • What determines the size of each of them?
  • What makes one faster?

The stack is the memory set aside as scratch space for a thread of execution. When a function is called, a block is reserved on the top of the stack for local variables and some bookkeeping data. When that function returns, the block becomes unused and can be used the next time a function is called. The stack is always reserved in a LIFO (last in first out) order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.

The heap is memory set aside for dynamic allocation. Unlike the stack, there’s no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time; there are many custom heap allocators available to tune heap performance for different usage patterns.

Each thread gets a stack, while there’s typically only one heap for the application (although it isn’t uncommon to have multiple heaps for different types of allocation).


To answer your questions directly:

To what extent are they controlled by the OS or language runtime?

The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.

What is their scope?

The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.

What determines the size of each of them?

The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).

What makes one faster?

The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or deallocation. Also, each byte in the stack tends to be reused very frequently which means it tends to be mapped to the processor’s cache, making it very fast. Another performance hit for the heap is that the heap, being mostly a global resource, typically has to be multi-threading safe, i.e. each allocation and deallocation needs to be – typically – synchronized with “all” other heap accesses in the program.


(A clear demonstration – image source: vikashazrati.wordpress.com.)

Stack:

  • Stored in computer RAM just like the heap.
  • Variables created on the stack will go out of scope and are automatically deallocated.
  • Much faster to allocate in comparison to variables on the heap.
  • Implemented with an actual stack data structure.
  • Stores local data, return addresses, used for parameter passing.
  • Can have a stack overflow when too much of the stack is used (mostly from infinite or too deep recursion, very large allocations).
  • Data created on the stack can be used without pointers.
  • You would use the stack if you know exactly how much data you need to allocate before compile time and it is not too big.
  • Usually has a maximum size already determined when your program starts.

Heap:

  • Stored in computer RAM just like the stack.
  • In C++, variables on the heap must be destroyed manually and never fall out of scope. The data is freed with delete, delete[], or free.
  • Slower to allocate in comparison to variables on the stack.
  • Used on demand to allocate a block of data for use by the program.
  • Can have fragmentation when there are a lot of allocations and deallocations.
  • In C++ or C, data created on the heap will be pointed to by pointers and allocated with new or malloc respectively.
  • Can have allocation failures if too big of a buffer is requested to be allocated.
  • You would use the heap if you don’t know exactly how much data you will need at run time or if you need to allocate a lot of data.
  • Responsible for memory leaks.

Example:

void foo()
{
  char *pBuffer; //<--nothing allocated yet (excluding the pointer itself, which is allocated here on the stack).
  bool b = true; // Allocated on the stack.
  if(b)
  {
    //Create 500 bytes on the stack
    char buffer[500];

    //Create 500 bytes on the heap
    pBuffer = new char[500];

   }//<-- buffer is deallocated here, pBuffer is not
}//<--- oops there's a memory leak, I should have called delete[] pBuffer;

The most important point is that heap and stack are generic terms for ways in which memory can be allocated. They can be implemented in many different ways, and the terms apply to the basic concepts.

  • In a stack of items, items sit one on top of the other in the order they were placed there, and you can only remove the top one (without toppling the whole thing over).

    (Image: a stack, like a stack of papers.)

    The simplicity of a stack is that you do not need to maintain a table containing a record of each section of allocated memory; the only state information you need is a single pointer to the end of the stack. To allocate and de-allocate, you just increment and decrement that single pointer. Note: a stack can sometimes be implemented to start at the top of a section of memory and extend downwards rather than growing upwards.

  • In a heap, there is no particular order to the way items are placed. You can reach in and remove items in any order because there is no clear ‘top’ item.

    (Image: a heap, like a heap of licorice allsorts.)

    Heap allocation requires maintaining a full record of what memory is allocated and what isn’t, as well as some overhead maintenance to reduce fragmentation, find contiguous memory segments big enough to fit the requested size, and so on. Memory can be deallocated at any time leaving free space. Sometimes a memory allocator will perform maintenance tasks such as defragmenting memory by moving allocated memory around, or garbage collecting – identifying at runtime when memory is no longer in scope and deallocating it.

These images should do a fairly good job of describing the two ways of allocating and freeing memory in a stack and a heap. Yum!

  • To what extent are they controlled by the OS or language runtime?

    As mentioned, heap and stack are general terms, and can be implemented in many ways. Computer programs typically have a stack called a call stack which stores information relevant to the current function such as a pointer to whichever function it was called from, and any local variables. Because functions call other functions and then return, the stack grows and shrinks to hold information from the functions further down the call stack. A program doesn’t really have runtime control over it; it’s determined by the programming language, OS and even the system architecture.

    A heap is a general term used for any memory that is allocated dynamically and randomly; i.e. out of order. The memory is typically allocated by the OS, with the application calling API functions to do this allocation. There is a fair bit of overhead required in managing dynamically allocated memory, which is usually handled by the runtime code of the programming language or environment used.

  • What is their scope?

    The call stack is such a low level concept that it doesn’t relate to ‘scope’ in the sense of programming. If you disassemble some code you’ll see relative pointer style references to portions of the stack, but as far as a higher level language is concerned, the language imposes its own rules of scope. One important aspect of a stack, however, is that once a function returns, anything local to that function is immediately freed from the stack. That works the way you’d expect it to work given how your programming languages work. In a heap, it’s also difficult to define. The scope is whatever is exposed by the OS, but your programming language probably adds its rules about what a “scope” is in your application. The processor architecture and the OS use virtual addressing, which the processor translates to physical addresses and there are page faults, etc. They keep track of what pages belong to which applications. You never really need to worry about this, though, because you just use whatever method your programming language uses to allocate and free memory, and check for errors (if the allocation/freeing fails for any reason).

  • What determines the size of each of them?

    Again, it depends on the language, compiler, operating system and architecture. A stack is usually pre-allocated, because by definition it must be contiguous memory. The language compiler or the OS determine its size. You don’t store huge chunks of data on the stack, so it’ll be big enough that it should never be fully used, except in cases of unwanted endless recursion (hence, “stack overflow”) or other unusual programming decisions.

    A heap is a general term for anything that can be dynamically allocated. Depending on which way you look at it, it is constantly changing size. In modern processors and operating systems the exact way it works is very abstracted anyway, so you don’t normally need to worry much about how it works deep down, except that (in languages where it lets you) you mustn’t use memory that you haven’t allocated yet or memory that you have freed.

  • What makes one faster?

    The stack is faster because all free memory is always contiguous. No list needs to be maintained of all the segments of free memory, just a single pointer to the current top of the stack. Compilers usually store this pointer in a special, fast register for this purpose. What’s more, subsequent operations on a stack are usually concentrated within very nearby areas of memory, which at a very low level is good for optimization by the processor on-die caches.

  • Both the stack and the heap are memory areas allocated from the underlying operating system (often virtual memory that is mapped to physical memory on demand).
  • In a multi-threaded environment each thread will have its own completely independent stack but they will share the heap. Concurrent access has to be controlled on the heap and is not possible on the stack.

The heap

  • The heap contains a linked list of used and free blocks. New allocations on the heap (by new or malloc) are satisfied by creating a suitable block from one of the free blocks. This requires updating the list of blocks on the heap. This meta information about the blocks on the heap is also stored on the heap often in a small area just in front of every block.
  • As the heap grows new blocks are often allocated from lower addresses towards higher addresses. Thus you can think of the heap as a heap of memory blocks that grows in size as memory is allocated. If the heap is too small for an allocation the size can often be increased by acquiring more memory from the underlying operating system.
  • Allocating and deallocating many small blocks may leave the heap in a state where there are a lot of small free blocks interspersed between the used blocks. A request to allocate a large block may fail because none of the free blocks are large enough to satisfy the allocation request even though the combined size of the free blocks may be large enough. This is called heap fragmentation.
  • When a used block that is adjacent to a free block is deallocated the new free block may be merged with the adjacent free block to create a larger free block effectively reducing the fragmentation of the heap.


The stack

  • The stack often works in close tandem with a special register on the CPU named the stack pointer. Initially the stack pointer points to the top of the stack (the highest address on the stack).
  • The CPU has special instructions for pushing values onto the stack and popping them off the stack. Each push stores the value at the current location of the stack pointer and decreases the stack pointer. A pop retrieves the value pointed to by the stack pointer and then increases the stack pointer (don’t be confused by the fact that adding a value to the stack decreases the stack pointer and removing a value increases it. Remember that the stack grows to the bottom). The values stored and retrieved are the values of the CPU registers.
  • If a function has parameters, these are pushed onto the stack before the call to the function. The code in the function is then able to navigate up the stack from the current stack pointer to locate these values.
  • When a function is called the CPU uses special instructions that push the current instruction pointer onto the stack, i.e. the address of the code executing on the stack. The CPU then jumps to the function by setting the instruction pointer to the address of the function called. Later, when the function returns, the old instruction pointer is popped off the stack and execution resumes at the code just after the call to the function.
  • When a function is entered, the stack pointer is decreased to allocate more space on the stack for local (automatic) variables. If the function has one local 32 bit variable four bytes are set aside on the stack. When the function returns, the stack pointer is moved back to free the allocated area.
  • Nesting function calls work like a charm. Each new call will allocate function parameters, the return address and space for local variables and these activation records can be stacked for nested calls and will unwind in the correct way when the functions return.
  • As the stack is a limited block of memory, you can cause a stack overflow by calling too many nested functions and/or allocating too much space for local variables. Often the memory area used for the stack is set up in such a way that writing below the bottom (the lowest address) of the stack will trigger a trap or exception in the CPU. This exceptional condition can then be caught by the runtime and converted into some kind of stack overflow exception.


Can a function be allocated on the heap instead of a stack?

No. Activation records for functions (i.e. local or automatic variables) are allocated on the stack, which is used not only to store these variables but also to keep track of nested function calls.

How the heap is managed is really up to the runtime environment. C uses malloc and C++ uses new, but many other languages have garbage collection.

However, the stack is a more low-level feature closely tied to the processor architecture. Growing the heap when there is not enough space isn't too hard, since it can be implemented in the library call that handles the heap. However, growing the stack is often impossible, as the stack overflow is only discovered when it is too late; shutting down the thread of execution is then the only viable option.

In the following C# code

public void Method1()
{
    int i = 4;                  // value type: lives on the stack
    int y = 2;                  // value type: lives on the stack
    class1 cls1 = new class1(); // reference on the stack, object on the heap
}

Here’s how the memory is managed

(Image: the local variables on the stack and the cls1 object on the heap.)

Local Variables that only need to last as long as the function invocation go in the stack. The heap is used for variables whose lifetime we don’t really know up front but we expect them to last a while. In most languages it’s critical that we know at compile time how large a variable is if we want to store it on the stack.

Objects (which vary in size as we update them) go on the heap because we don’t know at creation time how long they are going to last. In many languages the heap is garbage collected to find objects (such as the cls1 object) that no longer have any references.

In Java, most objects go directly into the heap. In languages like C / C++, structs and classes can often remain on the stack when you’re not dealing with pointers.

More information can be found here:

The difference between stack and heap memory allocation « timmurphy.org

and here:

Creating Objects on the Stack and Heap

This article is the source of picture above: Six important .NET concepts: Stack, heap, value types, reference types, boxing, and unboxing – CodeProject

but be aware it may contain some inaccuracies.

The Stack

When you call a function, the arguments to that function plus some other overhead are put on the stack. Some info (such as where to go on return) is also stored there. When you declare a variable inside your function, that variable is also allocated on the stack.

Deallocating the stack is pretty simple because you always deallocate in the reverse order in which you allocate. Stack stuff is added as you enter functions, the corresponding data is removed as you exit them. This means that you tend to stay within a small region of the stack unless you call lots of functions that call lots of other functions (or create a recursive solution).

The Heap

The heap is a generic name for where you put the data that you create on the fly. If you don't know how many spaceships your program is going to create, you are likely to use the new (or malloc or equivalent) operator to create each spaceship. This allocation is going to stick around for a while, so it is likely we will free things in a different order than we created them.

Thus, the heap is far more complex, because there end up being unused regions of memory interleaved with chunks that are in use; memory gets fragmented. Finding free memory of the size you need is a difficult problem. This is why the heap should be avoided (though it is still often used).

Implementation

Implementation of both the stack and heap is usually down to the runtime / OS. Often games and other applications that are performance critical create their own memory solutions that grab a large chunk of memory from the heap and then dish it out internally to avoid relying on the OS for memory.

This is only practical if your memory usage is quite different from the norm – i.e for games where you load a level in one huge operation and can chuck the whole lot away in another huge operation.

Physical location in memory

This is less relevant than you think because of a technology called virtual memory, which makes your program think that you have access to a certain address where the physical data is somewhere else (even on the hard disc!). The addresses you get for the stack are in increasing order as your call tree gets deeper. The addresses for the heap are unpredictable (i.e. implementation-specific) and frankly not important.

In Short

A stack is used for static memory allocation and a heap for dynamic memory allocation, both stored in the computer’s RAM.


In Detail

The Stack

The stack is a “LIFO” (last in, first out) data structure that is managed and optimized by the CPU quite closely. Every time a function declares a new variable, it is “pushed” onto the stack. Then every time a function exits, all of the variables pushed onto the stack by that function are freed (that is to say, they are deleted). Once a stack variable is freed, that region of memory becomes available for other stack variables.

The advantage of using the stack to store variables is that memory is managed for you. You don't have to allocate memory by hand, or free it once you don't need it any more. What's more, because the CPU organizes stack memory so efficiently, reading from and writing to stack variables is very fast.

More can be found here.


The Heap

The heap is a region of your computer’s memory that is not managed automatically for you, and is not as tightly managed by the CPU. It is a more free-floating region of memory (and is larger). To allocate memory on the heap, you must use malloc() or calloc(), which are built-in C functions. Once you have allocated memory on the heap, you are responsible for using free() to deallocate that memory once you don’t need it any more.

If you fail to do this, your program will have what is known as a memory leak. That is, memory on the heap will still be set aside (and won’t be available to other processes). As we will see in the debugging section, there is a tool called Valgrind that can help you detect memory leaks.

Unlike the stack, the heap does not have size restrictions on variable size (apart from the obvious physical limitations of your computer). Heap memory is slightly slower to be read from and written to, because one has to use pointers to access memory on the heap. We will talk about pointers shortly.

Unlike the stack, variables created on the heap are accessible by any function, anywhere in your program. Heap variables are essentially global in scope.

More can be found here.


Variables allocated on the stack are stored directly in memory, and access to this memory is very fast; their allocation is dealt with when the program is compiled. When a function or a method calls another function, which in turn calls another function, and so on, the execution of all those functions remains suspended until the very last function returns its value. The stack is always reserved in a LIFO order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack: freeing a block from the stack is nothing more than adjusting one pointer.

Variables allocated on the heap have their memory allocated at run time, and accessing this memory is a bit slower, but the heap size is limited only by the size of virtual memory. Elements of the heap have no dependencies on each other and can always be accessed randomly at any time. You can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time.


You can use the stack if you know exactly how much data you need to allocate before compile time, and it is not too big. You can use the heap if you don’t know exactly how much data you will need at runtime or if you need to allocate a lot of data.

In a multi-threaded situation each thread will have its own completely independent stack, but they will share the heap. The stack is thread specific and the heap is application specific. The stack is important to consider in exception handling and thread executions.

Each thread gets a stack, while there’s typically only one heap for the application (although it isn’t uncommon to have multiple heaps for different types of allocation).


At run time, if the application needs more heap, it can allocate memory from free memory, and if the stack needs memory, it can allocate from the free memory already reserved for the application.

Even more detail is given here and here.


Now come to your question’s answers.

To what extent are they controlled by the OS or language runtime?

The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.

More can be found here.

What is their scope?

Already given above.

“You can use the stack if you know exactly how much data you need to allocate before compile time, and it is not too big. You can use the heap if you don’t know exactly how much data you will need at runtime or if you need to allocate a lot of data.”

More can be found here.

What determines the size of each of them?

The size of the stack is set by the OS when a thread is created. The size of the heap is set on application startup, but it can grow as space is needed (the allocator requests more memory from the operating system).

What makes one faster?

Stack allocation is much faster since all it really does is move the stack pointer. Using memory pools, you can get comparable performance out of heap allocation, but that comes with a slight added complexity and its own headaches.

Also, stack vs. heap is not only a performance consideration; it also tells you a lot about the expected lifetime of objects.

Details can be found from here.

How do you stop scripters from slamming your website hundreds of times a second?

How about implementing something like SO does with the CAPTCHAs?

If you’re using the site normally, you’ll probably never see one. If you happen to reload the same page too often, post successive comments too quickly, or something else that triggers an alarm, make them prove they’re human. In your case, this would probably be constant reloads of the same page, following every link on a page quickly, or filling in an order form too fast to be human.

If they fail the check x times in a row (say, 2 or 3), give that IP a timeout or other such measure. Then at the end of the timeout, dump them back to the check again.


Since you have unregistered users accessing the site, you do have only IPs to go on. You can issue sessions to each browser and track that way if you wish. And, of course, throw up a human-check if too many sessions are being (re-)created in succession (in case a bot keeps deleting the cookie).

As far as catching too many innocents, you can put up a disclaimer on the human-check page: “This page may also appear if too many anonymous users are viewing our site from the same location. We encourage you to register or login to avoid this.” (Adjust the wording appropriately.)

Besides, what are the odds that X people are loading the same page(s) at the same time from one IP? If they’re high, maybe you need a different trigger mechanism for your bot alarm.


Edit: Another option, if they fail too many times and you're confident about the product's demand, is to block them and make them personally CALL you to remove the block.

Having people call does seem like an asinine measure, but it makes sure there’s a human somewhere behind the computer. The key is to have the block only be in place for a condition which should almost never happen unless it’s a bot (e.g. fail the check multiple times in a row). Then it FORCES human interaction – to pick up the phone.

In response to the comment of having them call me, there’s obviously that tradeoff here. Are you worried enough about ensuring your users are human to accept a couple phone calls when they go on sale? If I were so concerned about a product getting to human users, I’d have to make this decision, perhaps sacrificing a (small) bit of my time in the process.

Since it seems like you’re determined to not let bots get the upper hand/slam your site, I believe the phone may be a good option. Since I don’t make a profit off your product, I have no interest in receiving these calls. Were you to share some of that profit, however, I may become interested. As this is your product, you have to decide how much you care and implement accordingly.


The other ways of releasing the block just aren’t as effective: a timeout (but they’d get to slam your site again after, rinse-repeat), a long timeout (if it was really a human trying to buy your product, they’d be SOL and punished for failing the check), email (easily done by bots), fax (same), or snail mail (takes too long).

You could, of course, instead have the timeout period increase per IP for each time they get a timeout. Just make sure you’re not punishing true humans inadvertently.

 

Performance optimization strategies as a last resort

Let’s assume:

  • the code already is working correctly
  • the algorithms chosen are already optimal for the circumstances of the problem
  • the code has been measured, and the offending routines have been isolated
  • all attempts to optimize will also be measured to ensure they do not make matters worse

OK, you’re defining the problem to where it would seem there is not much room for improvement. That is fairly rare, in my experience. I tried to explain this in a Dr. Dobbs article in November 1993, by starting from a conventionally well-designed non-trivial program with no obvious waste and taking it through a series of optimizations until its wall-clock time was reduced from 48 seconds to 1.1 seconds, and the source code size was reduced by a factor of 4. My diagnostic tool was this. The sequence of changes was this:

  • The first problem found was use of list clusters (now called “iterators” and “container classes”) accounting for over half the time. Those were replaced with fairly simple code, bringing the time down to 20 seconds.

  • Now the largest time-taker is more list-building. As a percentage, it was not so big before, but now it is because the bigger problem was removed. I find a way to speed it up, and the time drops to 17 seconds.

  • Now it is harder to find obvious culprits, but there are a few smaller ones that I can do something about, and the time drops to 13 sec.

Now I seem to have hit a wall. The samples are telling me exactly what it is doing, but I can’t seem to find anything that I can improve. Then I reflect on the basic design of the program, on its transaction-driven structure, and ask if all the list-searching that it is doing is actually mandated by the requirements of the problem.

Then I hit upon a re-design, where the program code is actually generated (via preprocessor macros) from a smaller set of source, and in which the program is not constantly figuring out things that the programmer knows are fairly predictable. In other words, don’t “interpret” the sequence of things to do, “compile” it.

  • That redesign is done, shrinking the source code by a factor of 4, and the time is reduced to 10 seconds.

Now, because it’s getting so quick, it’s hard to sample, so I give it 10 times as much work to do, but the following times are based on the original workload.

  • More diagnosis reveals that it is spending time in queue-management. In-lining these reduces the time to 7 seconds.

  • Now a big time-taker is the diagnostic printing I had been doing. Flush that – 4 seconds.

  • Now the biggest time-takers are calls to malloc and free. Recycle objects – 2.6 seconds.

  • Continuing to sample, I still find operations that are not strictly necessary – 1.1 seconds.

Total speedup factor: 43.6

Now no two programs are alike, but in non-toy software I’ve always seen a progression like this. First you get the easy stuff, and then the more difficult, until you get to a point of diminishing returns. Then the insight you gain may well lead to a redesign, starting a new round of speedups, until you again hit diminishing returns. Now this is the point at which it might make sense to wonder whether ++i or i++ or for(;;) or while(1) are faster: the kinds of questions I see so often on Stack Overflow.

P.S. It may be wondered why I didn’t use a profiler. The answer is that almost every one of these “problems” was a function call site, which stack samples pinpoint. Profilers, even today, are just barely coming around to the idea that statements and call instructions are more important to locate, and easier to fix, than whole functions.

I actually built a profiler to do this, but for a real down-and-dirty intimacy with what the code is doing, there’s no substitute for getting your fingers right in it. It is not an issue that the number of samples is small, because none of the problems being found are so tiny that they are easily missed.

ADDED: jerryjvl requested some examples. Here is the first problem. It consists of a small number of separate lines of code, together taking over half the time:

 /* IF ALL TASKS DONE, SEND ITC_ACKOP, AND DELETE OP */
if (ptop->current_task >= ILST_LENGTH(ptop->tasklist)){
. . .
/* FOR EACH OPERATION REQUEST */
for ( ptop = ILST_FIRST(oplist); ptop != NULL; ptop = ILST_NEXT(oplist, ptop)){
. . .
/* GET CURRENT TASK */
ptask = ILST_NTH(ptop->tasklist, ptop->current_task);

These were using the list cluster ILST (similar to a list class). They are implemented in the usual way, with “information hiding” meaning that the users of the class were not supposed to have to care how they were implemented. When these lines were written (out of roughly 800 lines of code) thought was not given to the idea that these could be a “bottleneck” (I hate that word). They are simply the recommended way to do things. It is easy to say in hindsight that these should have been avoided, but in my experience all performance problems are like that. In general, it is good to try to avoid creating performance problems. It is even better to find and fix the ones that are created, even though they “should have been avoided” (in hindsight). I hope that gives a bit of the flavor.

Here is the second problem, in two separate lines:

 /* ADD TASK TO TASK LIST */
ILST_APPEND(ptop->tasklist, ptask);
. . .
/* ADD TRANSACTION TO TRANSACTION QUEUE */
ILST_APPEND(trnque, ptrn);

These are building lists by appending items to their ends. (The fix was to collect the items in arrays, and build the lists all at once.) The interesting thing is that these statements only cost (i.e. were on the call stack) 3/48 of the original time, so they were not in fact a big problem at the beginning. However, after removing the first problem, they cost 3/20 of the time and so were now a “bigger fish”. In general, that’s how it goes.

I might add that this project was distilled from a real project I helped on. In that project, the performance problems were far more dramatic (as were the speedups), such as calling a database-access routine within an inner loop to see if a task was finished.

REFERENCE ADDED: The source code, both original and redesigned, can be found at www.ddj.com, for 1993, in file 9311.zip, files slug.asc and slug.zip.

EDIT 2011/11/26: There is now a SourceForge project containing source code in Visual C++ and a blow-by-blow description of how it was tuned. It only goes through the first half of the scenario described above, and it doesn’t follow exactly the same sequence, but still gets a 2-3 order of magnitude speedup.

Suggestions:

  • Pre-compute rather than re-calculate: for any loops or repeated calls that contain calculations with a relatively limited range of inputs, consider building a lookup (array or dictionary) that holds the result of that calculation for all values in the valid range. Then use a simple lookup inside the algorithm instead (see the sketch after this list).
    Down-sides: if few of the pre-computed values are actually used this may make matters worse, and the lookup may take significant memory.
  • Don’t use library methods: most libraries need to be written to operate correctly under a broad range of scenarios, and perform null checks on parameters, etc. By re-implementing a method you may be able to strip out a lot of logic that does not apply in the exact circumstance you are using it.
    Down-sides: writing additional code means more surface area for bugs.
  • Do use library methods: to contradict myself, language libraries get written by people that are a lot smarter than you or me; odds are they did it better and faster. Do not implement it yourself unless you can actually make it faster (i.e.: always measure!)
  • Cheat: in some cases although an exact calculation may exist for your problem, you may not need ‘exact’, sometimes an approximation may be ‘good enough’ and a lot faster in the deal. Ask yourself, does it really matter if the answer is out by 1%? 5%? even 10%?
    Down-sides: Well… the answer won’t be exact.
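As a minimal sketch of the pre-compute suggestion (inputs limited to bytes 0-255 and sqrt as the expensive call, both invented for illustration):

#include <array>
#include <cmath>

// Build the table once; afterwards each call is an array index
// instead of a recomputation. Names here are made up for the sketch.
const std::array<double, 256>& sqrt_table() {
    static const std::array<double, 256> table = [] {
        std::array<double, 256> t{};
        for (int i = 0; i < 256; ++i) t[i] = std::sqrt(double(i));
        return t;
    }();
    return table;
}

double fast_sqrt_byte(unsigned char x) {
    return sqrt_table()[x];  // simple lookup inside the algorithm
}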

When you can’t improve the performance any more – see if you can improve the perceived performance instead.

You may not be able to make your fooCalc algorithm faster, but often there are ways to make your application seem more responsive to the user.

A few examples:

  • anticipating what the user is going to request, and starting work on it before they ask
  • displaying results as they come in, instead of all at once at the end
  • showing an accurate progress meter

These won’t make your program faster, but they might make your users happier with the speed you have.

I spend most of my life in just this place. The broad strokes are to run your profiler and get it to record:

  • Cache misses. Data cache is the #1 source of stalls in most programs. Improve cache hit rate by reorganizing offending data structures to have better locality; pack structures and numerical types down to eliminate wasted bytes (and therefore wasted cache fetches); prefetch data wherever possible to reduce stalls. (A struct-packing sketch follows this list.)
  • Load-hit-stores. Compiler assumptions about pointer aliasing, and cases where data is moved between disconnected register sets via memory, can cause a certain pathological behavior that causes the entire CPU pipeline to clear on a load op. Find places where floats, vectors, and ints are being cast to one another and eliminate them. Use __restrict liberally to promise the compiler about aliasing.
  • Microcoded operations. Most processors have some operations that cannot be pipelined, but instead run a tiny subroutine stored in ROM. Examples on the PowerPC are integer multiply, divide, and shift-by-variable-amount. The problem is that the entire pipeline stops dead while this operation is executing. Try to eliminate use of these operations or at least break them down into their constituent pipelined ops so you can get the benefit of superscalar dispatch on whatever the rest of your program is doing.
  • Branch mispredicts. These too empty the pipeline. Find cases where the CPU is spending a lot of time refilling the pipe after a branch, and use branch hinting if available to get it to predict correctly more often. Or better yet, replace branches with conditional-moves wherever possible, especially after floating point operations because their pipe is usually deeper and reading the condition flags after fcmp can cause a stall.
  • Sequential floating-point ops. Make these SIMD.
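Here is the struct-packing sketch promised above (field sizes assume a typical 64-bit ABI; the field names are invented):

#include <cstdint>

// Same fields, reordered to eliminate padding.
struct Loose {          // 24 bytes: the 8-byte-aligned double forces
    std::uint8_t  flag; // 7 bytes of padding after flag, 6 after id
    double        value;
    std::uint16_t id;
};

struct Packed {         // 16 bytes: same data, better cache density
    double        value;
    std::uint16_t id;
    std::uint8_t  flag;
};

static_assert(sizeof(Packed) < sizeof(Loose), "reordering saves space");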

And one more thing I like to do:

  • Set your compiler to output assembly listings and look at what it emits for the hotspot functions in your code. All those clever optimizations that “a good compiler should be able to do for you automatically”? Chances are your actual compiler doesn’t do them. I’ve seen GCC emit truly WTF code.

More suggestions:

  • Avoid I/O: Any I/O (disk, network, ports, etc.) is always going to be far slower than any code that is performing calculations, so get rid of any I/O that you do not strictly need.

  • Move I/O up-front: Load up all the data you are going to need for a calculation up-front, so that you do not have repeated I/O waits within the core of a critical algorithm (and maybe as a result repeated disk seeks, when loading all the data in one hit may avoid seeking).

  • Delay I/O: Do not write out your results until the calculation is over; store them in a data structure and then dump them out in one go at the end when the hard work is done (see the sketch after this list).

  • Threaded I/O: For those daring enough, combine ‘I/O up-front’ or ‘Delay I/O’ with the actual calculation by moving the loading into a parallel thread, so that while you are loading more data you can work on a calculation on the data you already have, or while you calculate the next batch of data you can simultaneously write out the results from the last batch.
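And the “Delay I/O” sketch promised above (names and workload invented for illustration):

#include <fstream>
#include <string>
#include <vector>

// Accumulate results in memory and write them in one buffered go,
// instead of hitting the file once per result.
void compute_and_save(const std::vector<int>& inputs, const std::string& path) {
    std::string out;
    out.reserve(inputs.size() * 8);           // avoid repeated reallocation
    for (int x : inputs)
        out += std::to_string(x * x) + '\n';  // stand-in for the real calculation
    std::ofstream(path) << out;               // single write at the end
}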

I love all the

  1. Graph algorithms, the Bellman–Ford algorithm in particular.
  2. Scheduling algorithms, the round-robin scheduling algorithm in particular.
  3. Dynamic programming algorithms, the 0/1 knapsack algorithm in particular.
  4. Backtracking algorithms, the 8-queens algorithm in particular.
  5. Greedy algorithms, the fractional knapsack algorithm in particular.

We use all these algorithms in our daily life in various forms at various places.

For example, every shopkeeper applies one or more of the several scheduling algorithms to serve his customers, depending upon his service policy and the situation. No single scheduling algorithm fits all situations.

All of us mentally apply one of the graph algorithms when we plan the shortest route to be taken when we go out for doing multiple things in one trip.

All of us apply one of the Greedy algorithms while selecting career, job, girlfriends, friends etc.

All of us apply one of the dynamic programming algorithms when we do simple multiplication mentally, by referring to the multiplication tables stored in our memory.

How much faster is C compared to Python?


Python’s built-in sort uses TimSort, a sort algorithm which was invented by Tim Peters, and is now used in other languages such as Java.

TimSort is a complex algorithm which uses the best of many other algorithms, and has the advantage of being stable: in other words, if two elements A & B are in the order A then B before the sort, and those elements compare equal during the sort, then the algorithm guarantees that the result will maintain that A-then-B ordering.

That means, for example, that if you want to order a set of student scores by score and then by name (so that equal scores are ordered alphabetically), you can sort by name first and then sort by score.
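As a sketch of that two-pass idiom, here it is with C++’s std::stable_sort, which offers the same stability guarantee (Python’s sort behaves the same way):

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Student { std::string name; int score; };

int main() {
    std::vector<Student> v{{"Cho", 82}, {"Ada", 91}, {"Bo", 82}};
    std::stable_sort(v.begin(), v.end(),
        [](const Student& a, const Student& b) { return a.name < b.name; });
    std::stable_sort(v.begin(), v.end(),   // equal scores keep their name order
        [](const Student& a, const Student& b) { return a.score < b.score; });
    for (const auto& s : v)
        std::cout << s.name << ' ' << s.score << '\n';
    // prints: Bo 82, Cho 82, Ada 91
}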

TimSort has good performance against data sets which are partially sorted or already sorted (areas where some other algorithms struggle).

 
 
Timsort – Wikipedia
Timsort was designed to take advantage of runs of consecutive ordered elements that already exist in most real-world data, natural runs . It iterates over the data collecting elements into runs and simultaneously putting those runs in a stack. Whenever the runs on the top of the stack match a merge criterion , they are merged. This goes on until all data is traversed; then, all runs are merged two at a time and only one sorted run remains. 




I’m currently coding a SAT solver algorithm that will have to take millions of input data, and I was wondering if I should switch from Python to C.

Answer: Comparing best-of-class equivalent algorithms, optimized compiled C code is often multiple orders of magnitude faster than Python code interpreted by CPython (the main Python implementation). Other Python implementations (like PyPy) might be a bit better, but not vastly so. Some computations fit Python better, but I have a feeling that a SAT solver implementation will not be competitive if written in Python.

All that said, do you need to write a new implementation? Could you use one of the excellent ones out there? CDCL implementations often do a good job, and there are various open-source ones readily available (e.g., this one: https://github.com/togatoga/togasat).

Comments:

1- I mean, it also depends. I recall seeing an analysis some time ago that showed CPython can be as fast as C … provided you are almost exclusively using library functions written in C. That being said, for any non-trivial Python program it will probably be the case that you spend quite a bit of time in the interpreter, and not in C library functions.


The other answers are mistaken. This is a very common confusion: they describe statically typed languages, not strongly typed languages. There is a big difference.

Strongly typed vs weakly typed:

In strongly typed languages you get an error if the types do not match in an expression. It does not matter if the type is determined at compile time (static types) or runtime (dynamic types).

Both java and python are strongly typed. In both languages, you get an error if you try to add objects with unmatching types. For example, in python, you get an error if you try to add a number and a string:

>>> a = 10
>>> b = "hello"
>>> a + b
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'str'

In Python, you get this error at runtime. In Java, you would get a similar error at compile time. Most statically typed languages are also strongly typed.

The opposite of strongly typed language is weakly typed. In a weakly typed language, there are implicit type conversions. Instead of giving you an error, it will convert one of the values automatically and produce a result, even if such conversion loses data. This often leads to unexpected and unpredictable behavior.

Javascript is an example of a weakly typed language.

> let a = 10
> let b = "hello"
> a + b
'10hello'

Instead of an error, JavaScript will convert a to string and then concatenate the strings.

Static types vs dynamic types:

In a statically typed language, variables are bound to types and may only hold data of that type. Typically you declare variables and specify the type of data that the variable holds. In some languages, the type can be deduced from what you assign to it, but it still holds that the variable is bound to that type. For example, in Java:

int a = 3;
a = "hello"; // Error: a can only contain integers

In a dynamically typed language, variables may hold any type of data. The type of the data is simply determined by what gets assigned to the variable at runtime. Python is dynamically typed, for example:

a = 10
a = "hello"
# no problem: a first held an integer and then a string

Comments:

#1: Don’t confuse strongly typed with statically typed.

Python is dynamically typed and strongly typed.
Javascript is dynamically typed and weakly typed.
Java is statically typed and strongly typed.
C is statically typed and weakly typed.

See these articles for a longer explanation:
Magic lies here – Statically vs Dynamically Typed Languages
Key differences between mainly used languages for data science

I also added a drawing that illustrates how strong and static typing relate to each other:

Python is dynamically typed because types are determined at runtime. The opposite of dynamically typed is statically typed (not strongly typed).

Python is strongly typed because it will give errors when types don’t match instead of performing implicit conversions. The opposite of strongly typed is weakly typed.

Python is strongly typed and dynamically typed.

What is the difference between finalize() and destructor in Java?

finalize() is not guaranteed to be called, and the programmer has no control over when or in what order finalizers are called.

They are useless and should be ignored.

A destructor is not part of Java. It is a C++ language feature with very precise definitions of when it will be called.
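For contrast, a minimal C++ sketch (not from the answer) of that precision: destructors run exactly when an object goes out of scope, in reverse order of construction.

#include <cstdio>

struct Tracer {
    const char* name;
    explicit Tracer(const char* n) : name(n) { std::printf("construct %s\n", name); }
    ~Tracer() { std::printf("destroy %s\n", name); }
};

int main() {
    Tracer a("a");
    {
        Tracer b("b");
    }                               // "destroy b" prints here, deterministically
    std::printf("end of main\n");
}                                   // "destroy a" prints here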

Comments:

1- Until we got to languages like Rust (with the Drop trait) and a few others, was C++ the only language which had the destructor as a concept? I feel like other languages were inspired by that.

2- Many other languages manage memory for you, even ones predating C: COBOL, FORTRAN and so on. That’s another reason there isn’t much attention paid to destructors.

What are some ways to avoid writing static helper classes in Java?

Mainly getting out of that procedural ‘function operates on parameters passed in’ mindset.

Tactically, the static can normally be moved onto one of the parameter objects. Or all the parameters become an object that the static moves to. A new object might be needed. Once done the static is now a fully fledged method on an object and is not static anymore.

I view this as a positive iterative step in discovering objects for a system.

For cases where a static makes sense (though none come to mind), a good practice is to move it closer to where it is used, either in the same package or on a class that is strongly related.

I avoid having global ‘Utils’ classes full of statics that are unrelated. That’s fairly basic design, keeping unrelated things separate. In this case, the SOLID ISP principle applies: segregate into smaller, more focused interfaces.

Is there any programming language as easy as Python and as fast and efficient as C++? If yes, why isn’t it used more often instead of C or C++ in low-level programming like embedded systems, AAA 2D and 3D video games, or robotics?

Not really. I use Python occasionally for “quick hacks” – programs that I’ll probably run once and then delete – and because I use Blender for 3D modeling, and Python is its scripting language.

I used to write quite a bit of JavaScript for web programming but since WASM came along and allows me to run C++ at very nearly full speed inside a web browser, I write almost zero JavaScript these days.

I use C++ for almost everything.

Once you get to know C++ it’s no harder than Python – the main thing I find great about Python is the number of easy-to-find libraries.

But in AAA games, the poor performance of Python pretty much rules it out.

In embedded systems, the computer is generally too small to fit a Python interpreter into memory – so C or C++ is a more likely choice.

This was actually one of the interview questions I got when I applied at Google.

“Write a function that returns the average of two numbers.”

So I did, the way you would expect: (x+y)/2. I did it as a C++ template so it works for any kind of number.
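A sketch of that first attempt (my reconstruction, not the original interview code):

// Naive midpoint: mathematically right, but x + y can overflow
// when the sum exceeds the range of T.
template <typename T>
T average(T x, T y) {
    return (x + y) / 2;
}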

interviewer: “What’s wrong with it?”

Well, I suppose there could be an overflow if adding the two numbers requires more space than the numeric type can hold. So I rewrote it as (x/2) + (y/2).

interviewer: “What’s wrong with it now?”

Well, I think we are losing a little precision by pre-dividing. So I wrote it another way.

interviewer: “What’s wrong with it now?”

And that went on for about 10 minutes. It ended with us talking about the heat death of the universe.

I got the job and ended up working with the guy. He said he had never done that before. He had just wanted to see what would happen.

Comments:

1-

The big problem you get with x/2 + y/2 is that it can/will give incorrect answers for integer inputs. For example, let’s average 3 and 3. The result should obviously be 3.

But with integer division, 3/2 = 1, and 1+1 = 2.

You need to add one to the result if and only if both inputs are odd.

2- Here’s what I’d do in C++ for integers, which I believe does the right thing including getting the rounding direction correct, and it can likely be made into a template that will do the right thing as well. This is not complete code, but I believe it gets the details correct…

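The code itself was an image in the original; here is a hedged reconstruction of that sort of function, keeping the op1/op2 names the surrounding text uses (C++20’s std::midpoint provides essentially this, rounding toward the first argument):

#include <type_traits>

template <typename T>
T average(T op1, T op2) {
    static_assert(std::is_integral<T>::value, "integral types only");
    using U = typename std::make_unsigned<T>::type;
    if (op1 <= op2) {
        U diff = static_cast<U>(op2) - static_cast<U>(op1);  // exact, can't overflow
        return static_cast<T>(op1 + static_cast<T>(diff / 2));
    } else {
        U diff = static_cast<U>(op1) - static_cast<U>(op2);
        return static_cast<T>(op1 - static_cast<T>(diff / 2));
    }
}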

That will work for any signed or unsigned integer type for op1 and op2 as long as they have the same type.

If you want it to do something intelligently where one of the operands is an unsigned type and the other one is a signed type, you could do it, but you need to define exactly what should happen, and realize that it’s quite likely that for maximum arithmetic correctness, the output type may need to be different than either input type. For instance, the average of a uint32_t and an int32_t can be too large to fit in an int32_t, and it can also be too small to fit in a uint32_t, so you probably need to go with a larger signed integer type, maybe int64_t.

3- I would have answered the question with a question, “Tell me more about the input, error handling capability of your system, and is this typical of the level of challenge here at google?” Then I’d provide eye contact, sit back, and see what happens. Years ago I had an interview question that asked what classical problem was part of a pen plotter control system. I told the interviewer that it was TSP but that if you had to change pens, you had to consider how much time it took to switch. They offered me a job but I declined given the poor financial condition of the company (SGI) which I discovered by asking the interviewer questions of my own. IMO: questions are at the heart of engineering. The interviewer, if they are smart, wants to see if you are capable of discovering the true nature of their problems. The best programmers I’ve ever worked with were able to get to the heart of problems and trade off solutions. Coding is a small part of the required skills.

It depends on how you want to store and access data.

For the most part, as a general concept, old school cryptography is obsolete.

It was based on ciphers, whose security rested on their being mathematically “hard” to crack.

If you can throw a compute cluster at DES, even with a one byte “salt”, it’s pretty easy to crack a password database in seconds. Minutes, if your cluster is small.

Almost all computer security is based on big number theory. Today, that’s called:

 
 
Law of large numbers – Wikipedia
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and tends to become closer to the expected value as more trials are performed. The LLN matters because it guarantees stable long-term results for the averages of some random events: while a casino may lose money in a single spin of the roulette wheel, its earnings will tend towards a predictable percentage over a large number of spins. Formally, the sample mean converges to the expected value, $\lim_{n\to\infty}\sum_{i=1}^{n}\frac{X_i}{n}=\overline{X}$, whereas the raw deviation from the expected total, $\sum_{i=1}^{n}X_i - n\overline{X}$, does not converge toward zero; it tends to grow in absolute value as $n$ increases. For example, the expected value of a single roll of a fair six-sided die is $(1+2+3+4+5+6)/6 = 3.5$, so the average of a large number of rolls is likely to be close to 3.5, with precision increasing as more dice are rolled.
 

What it means is that it’s hard to do math on very large numbers, and so if you have a large one, the larger the better.

Most cryptography today is based on elliptic curves.

But we know, by the proof of Fermat’s Last Theorem and specifically the Taniyama–Shimura conjecture, that all elliptic curves have modular forms.

And so this gives us an attack on all modern cryptography, using graphical mathematics.

It’s an interesting field, and problem space.

Not one I’m interested in solving, since I’m sure it has already been solved by my “associates” who now work for the NSA.

I am only interested in new problems.

Comments:

1- Sorry, but this is just wrong. “Almost all cryptography,” counted by number of bytes encrypted and decrypted, uses AES. AES does not use “large numbers,” elliptic curves, or anything of that sort – it’s essentially combinatorial in nature, with a lot of bit-diddling – though there is some group theory at its base. The same can be said about cryptographic checksums such as the SHA series, including the latest “sponge” constructions.

Where RSA and elliptic curves and such come in is public key cryptography. This is important in setting up connections, but for multiple reasons (performance – but also for excellent cryptographic reasons) is not used for bulk encryption. There are related algorithms like Diffie-Hellman and some signature protocols like DSS. All of these “use large numbers” in some sense, but even that’s pushing it – elliptic curve cryptography involves doing math over … points on an elliptic curve, which does lead you to do some arithmetic, but the big advantage of elliptic curves is that the numbers are way, way smaller than for, say, RSA for equivalent security.

Much research these days is on “post-quantum cryptography” – cryptography that is secure against attacks by quantum computers (assuming we ever make those work). These tend not to be based on “arithmetic” in any straightforward sense – the ones that seem to be at the forefront these days are based on computation over lattices.

Cracking a password database that uses DES is so far away from what cryptography today is about that it’s not even related. Yes, the original Unix implementations – almost 50 years ago – used that approach. So?

C++ lambda functions are syntactic sugar for a longstanding set of practices in both C and C++: passing a function as an argument to another function, and possibly connecting a little bit of state to it.

This goes way back. Look at C’s qsort():

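The snippet here was an image in the original; for reference, qsort is declared in <stdlib.h> as:

void qsort(void *base, size_t nmemb, size_t size,
           int (*compar)(const void *, const void *));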

That last argument is a function pointer to a comparison function. You could use a captureless lambda for the same purpose in modern C++.

Sometimes, you want to tack a little bit of extra state alongside the function. In C, one way to do this is to provide an additional context pointer alongside the function pointer. The context pointer will get passed back to the function as an argument.

I give an extended example here:

In C++, that context pointer can be this. When you do that, you have something called a function object. (Side note: function objects were sometimes called functors; however, functors aren’t really the same thing.)

If you overload the function call operator for a particular class, then objects of that class behave as function objects. That is, you can pretend like the object is a function by putting parentheses and an argument list after the name of an instance! When you arrive at the overloaded operator implementation, this will point at the instance.

Instances of this class will add an offset to an integer. The function call operator is operator() below.
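(The original code was shown as an image; this is a reconstruction with made-up names.)

class AddOffset {
    int offset;
public:
    explicit AddOffset(int off) : offset(off) {}
    int operator()(int x) const { return x + offset; }  // the function call operator
};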

and to use it:

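#include <iostream>

// (Reconstruction, continuing the sketch above.)
int main() {
    AddOffset add42(42);
    for (int i = 0; i < 10; ++i)
        std::cout << add42(i) << '\n';  // call the object like a function
}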

That’ll print out the numbers 42, 43, 44, … 51 on separate lines.

And tying this back to the qsort() example from earlier: C++’s std::sort can take a function object for its comparison operator.

Modern C++’s lambda functions are syntactic sugar for function objects. They declare a class with an unutterable name, and then give you an instance of that class. Under the hood, the class’ constructor implements the capture, and initializes any state variables.

Other languages have similar constructs. I believe this one originated in LISP. It goes waaaay back.

As for any challenges associated with them: lifetime management. You potentially introduce a non-nested lifetime for any state associated with the callback, function object, or lambda.

If it’s all self contained (i.e. it keeps its own copies of everything), you’re less likely to have a problem. It owns all the state it relies on.

If it has non-owning pointers or references to other objects, you need to ensure the lifetime of your callback/function object/lambda remains within the lifetime of that other non-owned object. If that non-owned object’s lifetime isn’t naturally a superset of the callback/function object/lambda, you should consider taking a copy of that object, or reconsider your design.

Each one has specific strengths in terms of syntax features.

But the way to look at this is that all three are general purpose programming languages. You can write pretty much anything in them.

Trying to rank these languages in some kind of absolute hierarchy makes no sense and only leads to tribal ‘fanboi’ arguments.

If you need part of your code to talk to hardware, or could benefit from taking control of memory management, C++ is my choice.

General web service stuff, Java has an edge due to familiarity.

Anything involving a pre-existing Microsoft component – e.g. data in SQL Server, Azure – I will go all in on C#.

I see more similarity than difference overall.

Visual Studio Code is OK if you can’t find anything better for the language you’re using. There are better alternatives for most popular languages.

C# – Use Visual Studio Community, it’s free, and far better than Visual Studio Code.

Java – Use IntelliJ

Go – Goland.

Python – PyCharm.

C or C++ – CLion.

If you’re using a more unusual language, maybe Rust, Visual Studio Code might be a good choice.

Comments:

#1: Just chipping in here. I used to be a massive Visual Studio fanboy and loved my fancy GUI for doing things without knowing what was actually happening. I’ve been using VS Code and Linux for a few years now and am really enjoying the bare-metal exposure you get working with them; typing commands is way faster than mouse-clicking through a bunch of GUIs. Both are good though.

#2:  C# is unusual in that it’s the only language which doesn’t follow the maxim, “if JetBrains have blessed your language with attention, use their IDE”.

Visual Studio really is first class.

#3: for Rust as long as you have rust-analyzer and clippy, you’re good to go. Vim with lua and VS Code both work perfectly.

#4: This is definitely skirting the realm of opinion. It’s a great piece of software. There is better and worse stuff but it all depends upon the person using it, their skill, and style of development.

#5: VSCode is excellent for coding. I’ve been using it for about 6 years now, mainly for Python work, but also developing JS based mobile apps. I mainly use Visual Studio, but VSC’s slightly stripped back nature has been embellished with plenty of updates and more GUI discovery methods, plus that huge extensions library (I’ve worked with the creation of an intellisense style plugin as well).

I’m personally a fan of keeping it simple on IDEs, and I work in a lot of languages. I’m not installing 6 or 7 IDEs because they apparently have advantages in that specific language, so I’d rather install one IDE which can do a credible job on all of them.

I’m more a fan of developing software than getting anally retentive about knowing all the keyboard shortcuts to format a source file. Life’s too short for that. Way too short!

To each their own. Enjoy whatever you use!

Dmitry Aliev is correct that this was introduced into the language before references.

I’ll take this question as an excuse to add a bit more color to this.

C++ evolved from C via an early dialect called “C with Classes”, which was initially implemented with Cpre, a fancy “preprocessor” targeting C that didn’t fully parse the “C with Classes” language. What it did was add an implicit this pointer parameter to member functions. E.g.:

Why is C++ "this" a pointer and not a reference?
Why is C++ “this” a pointer and not a reference?
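// (Reconstruction; the original snippet was an image.)
struct S {
    int f();
};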

was translated to something like:

int f__1S(S *this);

(the funny name f__1S is just an example of a possible “mangling” of the name of S::f, which allows traditional linkers to deal with the richer naming environment of C++).

What might come as a surprise to the modern C++ programmer is that in that model this is an ordinary parameter variable and therefore it can be assigned to! Indeed, in the early implementations that was possible:

 
Why is C++ "this" a pointer and not a reference?
Why is C++ “this” a pointer and not a reference?
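// (Reconstruction; not valid in standard C++ today.)
struct S {
    void f() {
        this = 0;  // accepted by early "C with Classes": this was a plain parameter
    }
};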

Interestingly, an idiom arose around this ability: Constructors could manage class-specific memory allocation by “assigning to this” before doing anything else in the constructor. E.g.:

 
Why is C++ "this" a pointer and not a reference?
Why is C++ “this” a pointer and not a reference?
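// (Reconstruction of the idiom; my_alloc is a made-up allocator name,
// and assigning to this is not valid standard C++ today.)
struct Z {
    Z() {
        this = my_alloc(sizeof(Z));  // constructor takes over allocation
        /* ... normal construction ... */
    }
};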

That technique (brittle as it was, particularly when dealing with derived classes) became so widespread that when C with Classes was re-implemented with a “real” compiler (Cfront), assignment to this remained valid in constructors and destructors even though this had otherwise evolved into an immutable expression. The C++ front end I maintain still has modes that accept that anachronism. See also section 17 of the old Cfront manual found here, for some fun reminiscing.

When standardization of C++ began, the core language work was handled by three working groups: Core I dealt with declarative stuff, Core II dealt with expression stuff, and Core III dealt with “new stuff” (templates and exception handling, mostly). In this context, Core II had to (among many other tasks) formalize the rules for overload resolution and the binding of this. Over time, they realized that that name binding should in fact be mostly like reference binding. Hence, in standard C++ the binding of something like:

 
Why is C++ "this" a pointer and not a reference?
Why is C++ “this” a pointer and not a reference?
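// (Reconstruction: calling a member function on an object.)
struct S2 {
    int f() { return 0; }
};

void demo_binding() {
    S2 x;
    x.f();  // binds much as "S2 &__this = x;" would
}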

In other words, the expression this is now effectively a kind of alias for &__this, where __this is just a name I made up for an unnamable implicit reference parameter.

C++11 further tweaked this by introducing syntax to control the kind of reference that this is bound from. E.g.,

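// (Reconstruction; the original example was lost in extraction.)
struct S {
    int f() &;   // "this" is bound from an lvalue, like S &
    int f() &&;  // "this" is bound from an rvalue, like S &&
};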

That model was relatively well-understood by the mid-to-late 1990s… but then unfortunately we forgot about it when we introduced lambda expressions. Indeed, in C++11 we allowed lambda expressions to “capture” this:

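// (Reconstruction; names are mine.)
struct Counter {
    int n = 0;
    auto make() {                         // C++14 deduced return type, for brevity
        return [this] { return n + 1; };  // captures the pointer "this", not the object
    }
};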
After that language feature was released, we started getting many reports of buggy programs that “captured” this thinking they captured the class value, when instead they really wanted to capture __this (or *this). So we scrambled to try to rectify that in C++17, but because lambdas had gotten tremendously popular we had to make a compromise. Specifically:

  • we introduced the ability to capture *this
  • we allowed [=, this] since now [this] is really a “by reference” capture of *this
  • even though [this] was now a “by reference” capture, we left in the ability to write [&, this], despite it being redundant (compatibility with earlier standards)

Our tale is not done, however. Once you write much generic C++ code you’ll probably find out that it’s really frustrating that the __this parameter cannot be made generic because it’s implicitly declared. So we (the C++ standardization committee) decided to allow that parameter to be made explicit in C++23. For example, you can write (example from the linked paper):

Why is C++ "this" a pointer and not a reference?

In that example, the “object parameter” (i.e., the previously hidden reference parameter __this) is now an explicit parameter and it is no longer a reference!

Here is another example (also from the paper):

 

Why is C++ "this" a pointer and not a reference?

Here:

  • the type of the object parameter is a deducible template-dependent type
  • the deduction actually allows a derived type to be found

This feature is tremendously powerful, and may well be the most significant addition by C++23 to the core language. If you’re reasonably well-versed in modern C++, I highly recommend reading that paper (P0847) — it’s fairly accessible.

It adds some extra steps in design, testing and deployment for sure. But it can buy you an easier path to scalability and an easier path to fault tolerance and live system upgrades.

It’s not REST itself that enables that. But if you use REST you will have split your code up into independently deployable chunks called services.

So more development work to do, yes, but you get something a single monolith can’t provide. If you need that, then the REST service approach is a quick way to get it.

We must compare like for like in terms of results for questions like this.

Because at the time, there was likely no need.

Based on what I could find, the strtok library function appeared in System III UNIX some time in 1980.

In 1980, memory was small, and programs were single threaded. I don’t know whether UNIX had any support for multiple processors, even. I think that happened a few years later.

Its implementation was quite simple.

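A sketch along the lines of that classic implementation (renamed my_strtok, and simplified; the real System III source differs in details):

#include <stddef.h>
#include <string.h>

/* One hidden static variable holds the parse position between calls,
   which is exactly why strtok can't work on two strings at once. */
char *my_strtok(char *s, const char *delim) {
    static char *save = NULL;   /* shared, single-threaded state */
    char *token;

    if (s == NULL)
        s = save;
    if (s == NULL)
        return NULL;
    s += strspn(s, delim);      /* skip leading delimiters */
    if (*s == '\0') {
        save = NULL;
        return NULL;
    }
    token = s;
    s = strpbrk(token, delim);  /* find the end of this token */
    if (s == NULL) {
        save = NULL;            /* token runs to the end of the string */
    } else {
        *s = '\0';              /* terminate the token in place */
        save = s + 1;
    }
    return token;
}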

 

This was 3 years before they started the standardization process, and 9 years before it was standardized in ANSI C.

This was simple and good enough, and that’s what mattered most. It’s far from the only library function with internal state.

And Lex/YACC took over more complex scanning and parsing tasks, so it probably didn’t get a lot of attention for the lightweight uses it was put to.

For a tongue-in-cheek take on how UNIX and C were developed, read this classic:

 
The Rise of “Worse is Better” By Richard Gabriel I and just about every designer of Common Lisp and CLOS has had extreme exposure to the MIT/Stanford style of design. The essence of this style can be captured by the phrase “the right thing.” To such a designer it is important to get all of the following characteristics right: · Simplicity-the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation. · Correctness-the design must be correct in all observable aspects. Incorrectness is simply not allowed. · Consistency-the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness. · Completeness-the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness. I believe most people would agree that these are good characteristics. I will call the use of this philosophy of design the “MIT approach.” Common Lisp (with CLOS) and Scheme represent the MIT approach to design and implementation. The worse-is-better philosophy is only slightly different: · Simplicity-the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design. · Correctness-the design must be correct in all observable aspects. It is slightly better to be simple than correct. · Consistency-the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency. · Completeness-the design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface. Early Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the “New Jersey approach.” I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach. However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach. Let me start out by retelling a story that shows that the MIT/New-Jersey distinction is valid and that proponents of each philosophy actually believe their philosophy is better.
 
 

Because the ‘under the hood’ code is about 50 years old. I’m not kidding. I worked on some video poker machines that were made in the early 1970’s.

Here’s how they work.

You have an array of ‘cards’ from 0 to 51. Pick one at random. Slap it in position 1 and take it out of your array. Do the same for the next card … see how this works?

Video poker machines are really that simple. They literally simulate a deck of cards.
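A minimal sketch of that dealing scheme (std::mt19937 here is just a stand-in; real machines use approved RNGs):

#include <algorithm>
#include <array>
#include <cstdio>
#include <numeric>
#include <random>

int main() {
    std::array<int, 52> deck;
    std::iota(deck.begin(), deck.end(), 0);       // cards 0..51

    std::mt19937 rng{std::random_device{}()};
    int remaining = 52;
    for (int dealt = 0; dealt < 5; ++dealt) {
        std::uniform_int_distribution<int> pick(0, remaining - 1);
        int i = pick(rng);
        std::printf("card %d\n", deck[i]);
        std::swap(deck[i], deck[remaining - 1]);  // take it out of the deck
        --remaining;
    }
}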

Anything else, at least in Nevada, is illegal. Let me rephrase that, it is ILLEGAL, in all caps.

If you were to try to make a video poker game (or video keno, or slot machine) in any other way than as close to truly random selection from an ‘array’ of options as you can get, Nevada Gaming will come after you so hard and fast, your third cousin twice removed will have their ears ring for a week.

That is if the Families don’t get you first, and they’re far less kind.

All the ‘magic’ is in the payout tables, which on video poker and keno are literally posted on every machine. If you can read them, you can figure out exactly what the payout odds are for any machine.

There’s also a little note at the bottom stating that the video poker machine you’re looking at uses a 52 card deck.

Comments:

1- I have a slot machine, and the code on the odds chip looks much like an Excel spreadsheet: every combination is displayed in this spreadsheet, so the exact odds can be listed in payout tables. The machine picks a random number, say 452 out of 1000. The computer looks at the spreadsheet and says that this is the combination bar-bar-7 and you get 2 credits for it. The wheels will spin to match the indication in the spreadsheet. If I go into the game diagnostics I can see whether it is a win or not; you do not win on what the wheels display, but on the actual number from the spreadsheet. The game knows if you won or lost before the wheels stop.

2- I had a conversation with a guy who had retired from working in casino security. He was also responsible for some setup and maintenance on slot machines, video poker and others. I asked about the infamous video poker machine that a programmer at the manufacturer had put in a backdoor so he and a few pals could get money. That was just before he’d started but he knew how it was done. IIRC there was a 25 step process of combinations of coin drops and button presses to make the machine hit a royal flush to pay the jackpot.

Slot machines that have mechanical reels actually run very large virtual reels. The physical reels have position encoders so the electronics and software can select which symbol to stop on. This makes for far more possible combinations than relying on the space available on the physical reels.

Those islands of machines with the sign that says 95% payout? Well, you guess which machine in the group is set to that payout % while the rest are much closer to the minimum allowed.

Machines with a video screen that gives you a choice of things to select by touch or button press? It doesn’t matter what you select, the outcome is pre-determined. For example, if there’s a grid of spots and the first three matches you get determines how many free spins you get, if the code stopped on giving you 7 free spins, out of a possible maximum of 25, you’re getting 7 free spins no matter which spots you touch. It will tease you with a couple of 25s, a 10 or 15 or two, but ultimately you’ll get three 7s, and often the 3rd 25 will be close to the other two or right next to the last 7 “you” selected to make you feel like you just missed it when the full grid is briefly revealed.

There was a Discovery Channel show where the host used various power tools to literally hack things apart to show their insides and how they worked. In one episode he sawed open a couple of slot machines, one from the 1960’s and a purely mechanical one from the 1930’s or possibly 1940’s. In that old machine he discovered the casino it had been in decades prior had installed a cheat. There was a metal wedge bolted into the notch for the 7 on one reel so it could never hit the 777 jackpot. I wondered if the Nevada Gaming Commission could trace the serial number and if they could levy a fine if the company that had owned and operated it was still in business.

3- Slightly off-topic: I worked for a company that sold computer hardware, and one of our customers was the company that makes gambling machines. They said that they spent close to $0 on software and all their budget on licensing characters.

This question is like asking why you would ever use int when you have the Integer class. Java programmers seem especially zealous about everything needing to be wrapped, and wrapped, and wrapped.

Yes, ArrayList<Integer> does everything that int[] does and more… but sometimes all you need to do is swat a fly, and you just need a flyswatter, not a machine-gun.

Did you know that in order to convert int[] to ArrayList<Integer>, the system has to go through the array elements one at a time and box them, which means creating a garbage-collected object on the heap (i.e. an Integer) for each individual int in the array? That’s right; if you just use int[], then only one memory allocation is needed, as opposed to one for each item.

I understand that most Java programmers don’t know about that, and the ones who do probably don’t care. They will say that this isn’t going to be the reason your program is running slowly. They will say that if you need to care about those kinds of optimizations, then you should be writing code in C++ rather than Java. Yadda yadda yadda, I’ve heard it all before. Personally though, I think that you should know, and should care, because it just seems wasteful to me. Why dynamically allocate n individual objects when you could just have a contiguous block in memory? I don’t like waste.

I also happen to know that if you have a blasé attitude about performance in general, then you’re apt to be the sort of programmer who unknowingly, unnecessarily writes four nested loops and then has no idea why their program took ten minutes to run even though the list was only 100 elements long. At that point, not even C++ will save you from your inefficiently written code. There’s a slippery slope here.

I believe that a software developer is a sort of craftsman. They should understand their craft, not only at the language level, but also how it works internally. They should convert int[] to ArrayList<Integer> only because they know the cost is insignificant, and they have a particular reason for doing so other than “I never use arrays, ArrayList is better LOL”.

Very similar, yes.

Both languages feature:

  • static typing
  • nominative interface typing
  • garbage collection
  • class-based design
  • single-dispatch polymorphism

so whilst syntax differs, the key things that separate OO support across languages are the same.

There are differences, but you can write the same design of OO program in either language and it won’t look out of place.

Last time I needed to write an Android app, even though I already knew Java, I still went with Kotlin 😀

I’d rather work in a language I don’t know than… Java… and yes, I know a decent Java IDE can auto-generate this code – but this only solves the problem of writing the code, it doesn’t solve the problem of having to read it, which happens a lot more than writing it.

I mean, which of the below conveys the programmer’s intent more clearly, and which one would you rather read when you forget what a part of the program does and need a refresher:

Even if both of them required no effort to write… the Java version is pure brain poison…

Because it’s insufficient to deal with the memory semantics of current computers. In fact, it was obsolete almost as soon as it first became available.

Volatile tells a compiler that it may not assume the value of a memory location has not changed between reads or writes. This is sometimes sufficient to deal with memory-mapped hardware registers, which is what it was originally for.

But that doesn’t deal with the semantics of a multiprocessor machine’s cache, where a memory location might be written and read from several different places, and we need to be sure we know when written values will be observable relative to control flow in the writing thread.

Instead, we need to deal with acquire/release semantics of values, and the compilers have to output the right machine instructions that we get those semantics from the real machines. So, the atomic memory intrinsics come to the rescue. This is also why inline assembler acts as an optimization barrier; before there were intrinsics for this, it was done with inline assembler. But intrinsics are better, because the compiler can still do some optimization with them.
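A minimal sketch of those acquire/release semantics with C++ atomics (names invented; the point is the release store paired with the acquire load):

#include <atomic>
#include <thread>

int data = 0;
std::atomic<bool> ready{false};

void producer() {
    data = 42;                                     // ordinary write
    ready.store(true, std::memory_order_release);  // publish it
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) { }  // wait for publication
    // here data == 42 is guaranteed; volatile alone cannot promise that
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}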

C++ is a programming language specified through a standard that is “abstract” in various ways. For example, that standard doesn’t currently formally recognize a notion of “runtime” (I would actually like to change that a little bit in the future, but we’ll see).

Now, in order to allow implementations to make assumptions it removes certain situations from the responsibility of the implementation. For example, it doesn’t require (in general) that the implementation ensure that accesses to objects are within the bounds of those objects. By dropping that requirement, the code for valid accesses can be more efficient than would be required if out-of-bounds situations were the responsibility of the implementation (as is the case in most other modern programming languages). Those “situations” are what we call “undefined behaviour”: The implementation has no specific responsibilities and so the standard allows “anything” to happen. This is in part why C++ is still very successful in applications that call for the efficient use of hardware resources.
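A minimal sketch of such an out-of-bounds situation, on which the standard places no requirements:

int main() {
    int a[4] = {0, 1, 2, 3};
    int i = 4;     // one past the last valid index
    return a[i];   // out-of-bounds read: undefined behaviour
}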

Note, however, that the standard doesn’t disallow an implementation from doing something that is implementation-specified in those “undefined behaviour” situations. It’s perfectly all right (and feasible) for a C++ implementation to be “memory safe” for example (e.g., not attempt access outside of object bounds). Such implementations have existed in the past (and might still exist, but I’m not currently aware of one that completely “contains” undefined behaviour).

ADDENDUM (July 16th, 2021):

The following article about undefined behavior crossed my metaphorical desk today:

 
May 26, 2021 Volume 19, issue 2 PDF Drill Bits Schrödinger’s Code Undefined behavior in theory and practice Terence Kelly with special guest borers Weiwei Gu and Vladimir Maksimovski Sanity vs. Speed Undefined behavior ranks among the most baffling and perilous aspects of popular programming languages. This installment of Drill Bits clears up widespread misconceptions and presents practical techniques to banish undefined behavior from your own code and pinpoint meaningless operations in any software—techniques that reveal alarming faults in software supporting business-critical applications at Fortune 500 companies. Early in the history of programming languages, two schools of thought diverged. Quicksort inventor C.A.R. Hoare summarized one philosophy in his Turing Award lecture: 7 The behavior of every syntactically correct program should be completely predictable from its source code. For the sake of safety, security, and programmer sanity, it must be impossible for a program to “run wild.” Ensuring well-defined behavior imposes runtime overheads (e.g., array bounds checks), but predictability justifies the cost. Today, “safe” languages such as Java embody Hoare’s advice. A different philosophy reigns in domains that demand efficiency and speed (e.g., infrastructure software). Systems programming languages such as C and C++ sacrifice safety and comprehensive semantics for performance. These languages, despite being meticulously standardized, do not define the behavior of all code that compiles. If a running program violates any one of myriad rules, all bets are off. The program might behave as intended, or crash, or corrupt priceless data, or serve an Internet villain. The computer might even catch fire—rogue software could literally fry the original IBM PC. 13 By declaring that certain coding errors yield undefined behavior, language standards permit compilers to skip runtime checks and optimize aggressively. They also shift the burden of ensuring predictability onto the programmer. Unfortunately, undefined behavior arises in many ways; appendix J.2 of the C standard lists scores, 2 and C++ adds many more. 3 This article surveys the most prominent pitfalls, presents examples from production software, and suggests practical ways to prevent and detect such bugs in serial code. An earlier Queue article by Hans-J. Boehm and Sarita V. Adve discusses undefined behavior in multithreaded software. 1 Guesswork Physical intuition misleads some developers into believing they can predict the behavior of software that executes undefined operations: “If defective track derails a locomotive, the train will go somewhere ,” they reason, concluding that we can know where. If pure reasoning can’t deduce the outcome, surely experiment must be definitive: “Like Schrödinger’s cat, undefined software exists in an indeterminate state only until we observe its behavior, whereupon something will happen.” Try it and see, says this mentality….




As a web developer, can you explain why React is needed?

In the early days of the internet, web sites were essentially made of static HTML files. Web servers were little more than file servers: when a user would come to a URL, the web server would simply fetch the page and send it to the user via their browser, along with all kinds of assets, like fonts and images.

The functionality of this kind of web page was very limited, so eventually the web became more dynamic. When people would visit a page or interact with a form, instead of just fetching data, the server could perform an operation and prepare some content on demand. That content would still be sent to the user’s browser. There could also be a little bit of code running on the browser, to animate pages, handle forms and whatnot, but not very much.


So up until around 2010, that was the dominant model. Code could be involved in generating content, but the browser wouldn’t do much; most of the logic would happen on servers, which would just send prepared content to the browser.

However, in the early 2010s, this paradigm started to shift. With HTML5/CSS3, the browser became much more capable, and so people started to move the logic that would generate content from the server to the browser. Instead of sending a whole styled HTML page, a web server could just send the data needed to create it. Then, code could run on the browser to actually turn that data into HTML. That browser code could also update what the user would see, making just the required data calls.

So, in the early to mid 2010s, front-end code would typically:

  • render complex web pages from data retrieved from back-end,
  • simulate “navigation” between different views: when the user would do some actions, the entire page would change, the url would update etc. but without actually loading a new page from the server.
  • maintain the state of an application: the application could track certain things about the user and the session, and won’t have to reload that information from the server all the time.
  • dynamically update both contents and style of a web page.

Now, all of this is possible to do in “vanilla javascript”. But it’s really cumbersome to implement it, and especially tricky to do it in a performant way. There are millions and millions of “web apps” that are replacing the static “web sites” of old, and which all need to dynamically render content. Should developers reimplement that from scratch each time?

Enter the web frameworks such as React. These frameworks are abstractions that let the developers focus on the logic of their web app (where the data comes from, how content is organized) without being tied to the nitty gritty. Web frameworks make developers organize their code in building blocks called modules or components. Somebody could write a header component and someone else building a page could reuse that header component. And a third developer could change the header component, and that change would be reflected everywhere the component is used. Folks could also build 3rd party libraries compatible with the web framework ecosystem, that would address common problems that many developers face. For instance, someone could create a date picker component (a notably tricky interface) that anyone can reuse and customize. Or create a solution to deal with very long pages by only rendering what is in the browser viewport, and creating/deleting elements as a user would scroll.

To have the support of this ecosystem is a huge productivity boost. There are millions of developers who work with React, and the most popular React libraries are very elegant solutions to hard problems (the same could be said of Angular, Vue etc., though their communities are a bit smaller).

React and web frameworks aren’t exactly needed, in fact there is a reverse trend in the last couple of years to go back to server generated content in some cases or to only use vanilla javascript, but it’s a very solid foundation to build a web app.

Comments:

1- The specific rationale for React is state management and efficient page updates. Its underlying power comes not just from the structure and tooling provided by it being a framework, but also from the virtual DOM and the component lifecycle, which, along with state management, enable greater interactivity without very slow, inefficient page updates.



2- React isn’t needed, but it is a great framework that can reduce the amount of work you do in making a website/webapp.


React is great for widgets and implementing patterns. You can keep data/text separate from structure and behavior. React, Angular and Vue are all popular frameworks. Before that we used stuff like Dust, Handlebars, jQuery and UI libraries like Dojo and jQuery UI.

Developers are always looking for ways to be more efficient and more maintainable. React is a current iteration tool for being more efficient.

3- It is useful as a pattern for devs to create packages that work together (the React packages). On NPM there are many packages, but each follows its own logic, with or without docs, is based on other packages, etc. With something like React, you are somewhat constrained to follow its rules and you enter its ecosystem, which is good. This is true for all frameworks/libraries.

React also has some configurations which follow best practices (create-react-app, NextJS, etc.), but the same is true for the others.

The difference is that React stays close to JS and leaves a lot of freedom: what to use as a package, which starter pack, whether or not to use TypeScript.
