What are the top 5 common Python patterns when using dictionaries?

In Python, a dictionary is a data structure that allows you to store data in a key/value format. This is similar to a Map in Java. A dictionary is mutable, which means you can add, remove, and update elements in a dictionary. Since Python 3.7, dictionaries preserve insertion order (in earlier versions the order was arbitrary). Python dictionaries are extremely versatile data structures. They can be used to store data in a variety of ways and can be manipulated to perform a wide range of operations.

There are many different ways to use dictionaries in Python. In this blog post, we will explore some of the most popular patterns for using dictionaries in Python.

The first pattern is using the in operator to check if a key exists in a dictionary. This can be helpful when you want to avoid errors when accessing keys that may not exist.
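For example, a minimal sketch (the inventory dictionary is made up for illustration):

```python
inventory = {'apples': 3, 'pears': 0}

if 'apples' in inventory:
    print(inventory['apples'])  # safe: the key is known to exist

if 'bananas' not in inventory:
    print('no bananas')  # the complementary check also works
```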

The second pattern is using the get method to access values in a dictionary. This is similar to using the in operator, but it also allows you to specify a default value to return if the key does not exist.

The third pattern is using nested dictionaries. This is useful when you need to store multiple values for each key in a dictionary.
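For example, a minimal sketch of a nested dictionary (the user data is made up for illustration):

```python
users = {
    'alice': {'email': 'alice@example.com', 'age': 30},
    'bob': {'email': 'bob@example.com', 'age': 25},
}

print(users['alice']['email'])  # a second set of brackets reaches the inner dictionary
```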

The fourth pattern is using the items method to iterate over the key-value pairs in a dictionary. This is handy when you need to perform some operation on each pair in the dictionary.

The fifth and final pattern is using the update method to merge two dictionaries together. This can be useful when you have two dictionaries with complementary data that you want to combine into one dictionary.

1) Creating a Dictionary
You can create a dictionary by using curly braces {} and separating key/value pairs with a comma. Keys must be unique and must be immutable (i.e., they cannot be changed). Values can be anything you want, including another dictionary. Here is an example of creating a dictionary:

```python
dict1 = {'a': 1, 'b': 2, 'c': 3}
```

 


2) Accessing Elements in a Dictionary
You can access elements in a dictionary by using square brackets [] and the key for the element you want to access. For example:
```python
print(dict1['a'])  # prints 1
```

If the key doesn’t exist in the dictionary, you will get a KeyError. You can avoid this by using the get() method, which returns None if the key doesn’t exist in the dictionary. For example:

```python
print(dict1.get('d'))  # prints None
```

If you want to get all of the keys or values from a dictionary, you can use the keys() or values() methods. For example:

```python
keys = dict1.keys()  # gets all of the keys
print(keys)          # dict_keys(['a', 'b', 'c'])

values = dict1.values()  # gets all of the values
print(values)            # dict_values([1, 2, 3])
```

3) Updating Elements in a Dictionary

You can update elements in a dictionary by using square brackets [] and assigning a new value to the key. For example:

```python
dict1['a'] = 10
print(dict1['a'])  # prints 10
```

You can add items to a dictionary by using the update() method. It takes another dictionary (or an iterable of key/value pairs, or keyword arguments) and adds each element to the dictionary as a key-value pair. If a key already exists in the dictionary, its value is updated with the new value.

```python
d = {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}
d.update({'key4': 'value4', 'key5': 'value5'})
print(d)  # {'key1': 'value1', 'key2': 'value2', 'key3': 'value3', 'key4': 'value4', 'key5': 'value5'}
```


4) Deleting Elements from a Dictionary

You can delete elements from a dictionary by using the del keyword and specifying the key for the element you want to delete. For example:

```python
del dict1['c']
print(dict1)  # prints {'a': 10, 'b': 2}
```

You can also remove items from a dictionary using the pop() or clear() methods. The pop() method removes the item with the given key and returns its value; you can pass a second argument to be returned as a default if the key is missing (otherwise a missing key raises a KeyError). The related popitem() method removes and returns the most recently inserted key-value pair. The clear() method removes all items, leaving the dictionary empty.

```python
d = {'key1': 'value1', 'key2': 'value2', 'key3': 'value3'}
d.pop('key1')        # returns 'value1'
d.pop('key4', None)  # returns None instead of raising KeyError
d.popitem()          # removes and returns ('key3', 'value3')
d.clear()            # d is now {}
```

 

5) Looping Through Elements in a Dictionary

You can loop through elements in a dictionary by using a for loop on either the keys(), values(), or items(). items() returns both the keys and values from the dictionary as tuples (key, value). For example:

```python
for key in dict1:
    print("{}: {}".format(key, dict1[key]))  # prints each key/value pair

for key, value in dict1.items():
    print("{}: {}".format(key, value))  # prints each key/value pair

for value in dict1.values():
    print("{}".format(value))  # prints each value
```

6) For iterating over a dictionary and accessing the key and value at the same time:

```python
for key, value in d.items():
    ...
```

instead of:

```python
for key in d:
    value = d[key]
    ...
```

7) For getting a value if the key doesn’t exist:

```python
v = d.get(k, None)
```

instead of:

```python
if k in d:
    v = d[k]
else:
    v = None
```

8) For collating values against keys that can be duplicated:

```python
from collections import defaultdict

d = defaultdict(list)
for key, value in datasource:
    d[key].append(value)
```

instead of:

```python
d = {}
for key, value in datasource:
    if key in d:
        d[key].append(value)
    else:
        d[key] = [value]
```

9) And of course, if you find yourself doing this:

```python
from collections import defaultdict

d = defaultdict(int)
for key in datasource:
    d[key] += 1
```

then maybe you need to do this:

```python
from collections import Counter

c = Counter(datasource)
```

Dictionaries are one of the most versatile data structures available in Python. As you have seen from this blog post, there are many different ways that they can be used to store and manipulate data. Whether you are just starting out with Python or are an experienced programmer, understanding how to use dictionaries effectively is essential to writing efficient and maintainable code.

Dictionaries are powerful data structures that offer a lot of flexibility in how they can be used. By understanding and utilizing these common patterns, you can leverage the power of dictionaries to write more efficient and effective Python code. Thanks for reading!

Which programming language produces binaries that are the most difficult to reverse engineer?


Have you ever wondered how someone might go about taking apart your favorite computer program to figure out how it works? The process is called reverse engineering, and it’s done all the time by software developers in order to learn from other programs or to find security vulnerabilities. In this blog post, we’ll discuss why some programming languages make reverse engineering more difficult than others. We’re going to take a look at why binaries that were originally written in assembly code are generally the most difficult to reverse engineer.

Any given high-level programming language will compile down to assembly code before becoming a binary. Because of this, the level of difficulty in reverse engineering a binary is going to vary depending on the original high-level programming language.

Reverse Engineering

Reverse engineering is the process of taking something apart in order to figure out how it works. In the context of software, this usually means taking a compiled binary and figuring out what high-level programming language it was written in, as well as what the program is supposed to do. This can be difficult for a number of reasons, but one of the biggest factors is the level of optimization that was applied to the code during compilation.

In order to reverse engineer a program, one must first understand how that program was created. This usually involves decompiling the program into its original source code so that it can be read and understood by humans.

Once the source code has been decompiled, a reverse engineer can begin to understand how the program works and look for ways to modify or improve it. However, decompiling a program is not always a trivial task. It can be made significantly more difficult if the program was originally written in a language that produces binaries that are difficult to reverse engineer.


Some Languages Are More Difficult to Reverse Engineer Than Others.

There are many factors that can make reversing a binary more difficult, but they all stem from the way that the compiled code is organized. For example, consider two different programs written in two different languages. Both programs do the same thing: print “Hello, world!” to the screen. One program is written in C++ and one is written in Java.

When these programs are compiled, the C++ compiler produces native machine code, while the Java compiler produces bytecode for the Java Virtual Machine. C++ lets programmers specify things like data types and memory layout explicitly, and the compiler then discards most of that symbolic information, so compiled C++ programs tend to be compact and efficient.

However, this also means that C++ binaries are more difficult to reverse engineer than Java binaries. Java bytecode preserves class names, method signatures, and type information, so a decompiler can recover something close to the original source. In a C++ binary, all of the information about data types and memory layout has been flattened into raw machine code, so someone who wants to reverse engineer it needs to spend much more time understanding how the compiled code is organized before they can even begin to understand what it does.



Optimization

Optimization is a process where the compiler tries to make the generated code run as fast as possible, or take up as little memory as possible. This is generally accomplished by reorganizing the code in ways that make it harder for a human to read. For example, consider this simple C++ program:

```cpp
int main() {
    int x = 5;
    int y = 10;
    int z = x + y;
    return z;
}
```
With optimizations enabled, this compiles down to assembly code that looks something like this:

```asm
main:
    mov eax, 15   ; x + y was folded to a constant at compile time
    ret
```

As you can see, even this very simple program has been optimized to the point where it no longer resembles the source: the variables x, y, and z are gone, and the addition happened at compile time, leaving only the final constant. If you were trying to reverse engineer this program, you would have a very difficult time recovering the original logic from the assembly code alone.
Of course, there are ways to reverse engineer programs even if they’ve been heavily optimized. However, all things being equal, it’s generally going to be more difficult to reverse engineer a binary that was originally written in assembly code or compiled with aggressive optimization than one that was written in a higher-level language such as Java or Python. This is because higher-level languages typically ship as bytecode or rely on a runtime that preserves names and structure, while optimized native code retains almost none of it. As a result, binaries that were originally written in assembly tend to be more difficult to reverse engineer than those written in other languages.


According to Tim Mensch, the programming languages producing binaries that are the most difficult to reverse engineer are probably anything that goes through a modern optimization backend like gcc or LLVM.

And note that gcc is now the GNU Compiler Collection, a backronym that they came up with after adding a number of frontend languages. In addition to C, there are frontends for C++, Objective-C, Objective-C++, Fortran, Ada, D, and Go, plus others that are less mature.

LLVM has even more options. The Wikipedia page lists ActionScript, Ada, C#, Common Lisp, PicoLisp, Crystal, CUDA, D, Delphi, Dylan, Forth, Fortran, Free Basic, Free Pascal, Graphical G, Halide, Haskell, Java bytecode, Julia, Kotlin, Lua, Objective-C, OpenCL, PostgreSQL’s SQL and PLpgSQL, Ruby, Rust, Scala, Swift, XC, Xojo and Zig.

I don’t even know what all of those languages are. In some cases they may include enough of a runtime to make it easier to reverse engineer the underlying code (I’m guessing the Lisp dialects and Haskell would, among others), but in general, once compiled to a target architecture with maximum optimization, all of the above would be more or less equally difficult to reverse engineer.

Languages that are more rare (like Zig) may have an advantage by virtue of doing things differently enough that existing decompilers would have trouble. But that’s only an incremental increase in difficulty.

There exist libraries that you can add to a binary to make it much more difficult to reverse engineer. Tools that prevent trivial disassembly or that make code fail if run in a debugger, for instance. If you really need to protect code that you have to distribute, then using one of those products might be appropriate.

But overall the only way to be sure that no one can reverse engineer your code (aside from nuking it from orbit, which has the negative side effect of eliminating your customer base) is to never distribute your code: Run anything proprietary on servers and only allow people with active accounts to use it.

Generally, though? 99.9% of code isn’t worth reverse engineering. If you’re not being paid by some large company doing groundbreaking research (and you’re not if you would ask this question) then no one will ever bother to reverse engineer your code. This is a really, really frequent “noob” question, though: Because it was so hard for a new developer to write an app, they fear someone will steal the code and use it in their own app. As if anyone would want to steal code written by a junior developer. 🙄

More to the point, stealing your app and distributing it illegally can generally be done without reverse engineering it at all; I guarantee that many apps on the Play Store are hacked and republished with different art without the thieves even slightly understanding how the app works. It’s only if you embed some kind of copy protection/DRM into your app that they’d even need to hack it, and if you’re not clever about how you add the DRM, hacking it won’t take much effort or any decompiling at all. If you can point a debugger at the code, you can simply walk through the assembly language and find where it does the DRM check—and disable it. I managed to figure out how to do this as a teen, on my own, pre-Internet (for research purposes, of course). I guarantee I’m not unique or even that skilled at it, but start to finish I disabled DRM in a couple hours at most.


So generally, don’t even bother worrying about how difficult something is to reverse engineer. No one cares to see your code, and you can’t stop them from hacking the app if you add DRM. So unless you can keep your unique code on a server and charge a subscription, count on the fact that if your app gets popular, it will be stolen. People will also share subscription accounts, so you need to worry about that as well when you design your server architecture.

There are a lot of myths and misconceptions out there about binary reversing.

Myth #1: Reversing a Binary is Impossible
This is simply not true. Given enough time and effort, anyone can reverse engineer a binary. It may be difficult, but it’s certainly not impossible. The first step is to understand what the program is supposed to do. Once you have a basic understanding of the program’s functionality, you can start reverse engineering the code. This process will help you understand how the program works and how to modify it to suit your needs.

Myth #2: You Need Special Tools to Reverse Engineer a Binary
Again, this is not true. All you really need is a text editor and a disassembler. A disassembler will take the compiled code and turn it into assembly code, which is much easier to read and understand. Once you have the assembly code, you can start to reverse engineer the program. You may find it helpful to use a debugger during this process so that you can step through the code and see what each instruction does. However, a debugger is not strictly necessary; it just makes the process easier. If you don’t have access to a debugger, you can still reverse engineer the program by tracing through the code manually.

Myth #3: Only Certain Types of Programs Can Be Reversed Engineered
This myth is half true. Closed-source programs are certainly harder to reverse engineer than open-source programs, because you don’t have access to the source code. However, with enough time and effort, you can reverse engineer any type of program. The key is to understand the program’s functionality and then start breaking down the code into smaller pieces that you can understand. Once you have a good understanding of how the program works, you can start to figure out ways to modify it to suit your needs.

In conclusion,

We can see that binaries compiled from assembly code are generally more difficult to reverse engineer than those from other high-level languages. This is due to the level of optimization that’s applied during compilation, which can make the generated code very difficult for humans to understand. However, with enough effort and expertise, it is still possible to reverse engineer any given binary.

So, which programming language produces binaries that are the most difficult to reverse engineer?

There is no definitive answer, as it depends on many factors including the specific features of the language and the way that those features are used by individual programmers. However, languages like C++ that allow for explicit control over data types and memory layout tend to produce binaries that are more difficult to reverse engineer than interpreted languages like Java.

What are the Greenest or Least Environmentally Friendly Programming Languages?


Technology has revolutionized the way we live, work, and play. It has also had a profound impact on the world of programming languages. In recent years, there has been growing interest in “green”, energy-efficient languages. Compiled languages such as C, C++, and Rust stand out in this category: they have been shown to consume far less energy than interpreted languages like Python or JavaScript. So if you’re looking for a language that’s good for the environment, these are definitely worth considering.

The study below runs 10 benchmark problems in 27 languages [1]. It measures the runtime, memory usage, and energy consumption of each language. The abstract of the paper is shown below.

“This paper presents a study of the runtime, memory usage and energy consumption of twenty seven well-known software languages. We monitor the performance of such languages using ten different programming problems, expressed in each of the languages. Our results show interesting findings, such as, slower/faster languages consuming less/more energy, and how memory usage influences energy consumption. We show how to use our results to provide software engineers support to decide which language to use when energy efficiency is a concern”. [2]

According to the paper, the researchers monitored the performance of these languages using different programming problems, implemented with algorithms from the “Computer Language Benchmarks Game” project, which is dedicated to implementing the same algorithms in different languages.

The team used Intel’s Running Average Power Limit (RAPL) tool to measure power consumption, which can provide very accurate power consumption estimates.
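As a rough illustration of the idea (not the paper’s actual harness), on Linux with an Intel CPU you can read the RAPL energy counter that the powercap interface exposes. The sysfs path below is the common default, may differ per machine, and reading it can require elevated privileges:

```python
import time

# Package-0 energy counter in microjoules (path may vary by machine).
RAPL_FILE = '/sys/class/powercap/intel-rapl:0/energy_uj'

def read_energy_uj():
    with open(RAPL_FILE) as f:
        return int(f.read())

def measure(fn):
    e0, t0 = read_energy_uj(), time.time()
    fn()
    e1, t1 = read_energy_uj(), time.time()
    # Note: the counter periodically wraps; a robust harness would handle that.
    print('energy: {:.3f} J over {:.3f} s'.format((e1 - e0) / 1e6, t1 - t0))

measure(lambda: sum(i * i for i in range(10_000_000)))
```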


As expected, the research shows that several factors influence energy consumption. Execution speed is usually decisive, but the fastest language is not always the one that consumes the least energy: other factors, such as memory usage, also enter the power-consumption equation.

Energy

From this table, it is worth noting that C, C++, and Java are among the languages that consume the least energy. On the other hand, JavaScript consumes almost twice as much as Java and four times what C consumes. As an interpreted language, Python needs more time to execute and is therefore one of the least “green” languages, sitting among those that consume the most energy.


Time

The results are similar to the energy expenditure; the faster a programming language is, the less energy it expends.


Memory

In terms of memory consumption, we see how Java has become one of the most memory-consuming languages along with JavaScript.


Ranking

In this ranking, we can see that the “greenest” and most efficient languages are C, C++, Rust, and Java, although Java’s memory usage is far higher than the others’.

From the Paper: Normalized global results for Energy, Time, and Memory.


To conclude: 

Most Environmentally Friendly Languages: C, Rust, and C++
Least Environmentally Friendly Languages: Ruby, Python, Perl


Although this study may seem a curiosity without much practical application, it may help design better and more efficient programming languages. We can also use this new parameter in our equation when choosing a programming language.

This parameter can no longer be ignored in the near future; besides, the fastest languages are generally also the most environmentally friendly.

If you’re interested in something that is both green and energy efficient, you might want to consider the Groeningen Programming Language (GPL). Developed by a team of researchers at the University of Groningen in the Netherlands, GPL is a relatively new language that is based on the C and C++ programming languages. Python and Rust are also used in its development. GPL is designed to be used for developing energy efficient applications. Its syntax is similar to other popular programming languages, so it should be relatively easy for experienced programmers to learn. And since it’s open source, you can download and use it for free. So why not give GPL a try? It just might be the perfect language for your next project.

Top 10 Caveats – Counterarguments:

#1 C++ will perform better than Python at solving some simple algorithmic problems. C++ is a fairly bare-bones language with a medium level of abstraction, while Python is a high-level language that relies on many external components, some of which are actually written in C++. And of course C++ will be more efficient than C# at solving some basic problems. But let’s see what happens if you build a complete web application back-end in C++.

#2: This isn’t very useful. I can imagine that the fastest (performance-wise) programming languages are the greenest, and vice versa. However, running time is not the only factor here. An engineer may spend 5 minutes writing a Python script that does the job pretty well, or spend hours debugging C++ code that does the same thing. And the performance of the final code may not differ much!

#3: Has anyone actually taken a look at the winning C and Rust solutions? Most of them are hand-written assembly code masked as SSE intrinsics. That is the kind of code that only a handful of people are able to maintain, not to mention come up with. On the other hand, the Python solutions are pure Python code without a trace of the accelerated (read: written in Fortran, C, C++, and/or Rust) libraries like NumPy used in all sane Python projects.

#4: I used C++ years ago and now use Python. To save energy, I turn off my laptop when I get off work, I don’t use extra monitors, my AC is always set to 28 degrees Celsius, I plan to change my car to an electric one, and I use Python.

#5: I disagree. We should consider the energy saved by the products created in those languages. For example, C#-based Microsoft Teams allows people to work remotely. How much CO2 do we save that way? 😉

Now, try to do the same in C.

#6 Also, some Python programs, such as anything using NumPy, spend a considerable fraction of their cycles outside the Python interpreter, in a C or C++ library.


I would love to see a scatterplot of execution time vs. energy usage as well. Given that modern CPUs can turbo and then go to a low-power state, a modest increase of energy usage during execution can pay dividends in letting the processor go to sleep quicker.

An application that vectorized heavily may end up having very high peak power and moderately higher energy usage that’s repaid by going to sleep much sooner. In the cell phone application processor business, we called that “race to sleep.” By Joe Zbiciak

#7 By Tim Mensch: It’s almost complete garbage.

If you look at the TypeScript numbers, they are more than 5x worse than JavaScript.

This has to mean they were running the TypeScript compiler every time they ran their benchmark. That’s not how TypeScript works. TypeScript should be identical to JavaScript. It is JavaScript once it’s running, after all.

Given that glaring mistake, the rest of their numbers are suspect.

I suspect Python and Ruby really are pretty bad given better written benchmarks I’ve seen, but given their testing issues, not as bad as they imply. Python at least has a “compile” phase as well, so if they were running a benchmark repeatedly, they were measuring the startup energy usage along with the actual energy usage, which may have swamped the benchmark itself.

PHP similarly has a compile step, but PHP may actually run that compile step every time a script is run. So of all of the benchmarks, it might be the closest.

I do wonder if they also compiled the C and C++ code as part of the benchmarks as well. C++ should be as optimized or more so than C, and as such should use the same or less power, unless you’re counting the compile phase. And if they’re also measuring the compile phase, then they are being intentionally deceptive. Or stupid. But I’ll go with deceptive to be polite. (You usually compile a program in C or C++ once and then you can run it millions or billions of times—or more. The energy cost of compiling is miniscule compared to the run time cost of almost any program.)


I’ve read that 80% of all studies are garbage. This is one of those garbage studies.

#8 By Chaim Solomon: This is nonsense

This is nonsense as it runs low-level benchmarks that benchmark basic algorithms in high-level languages. You don’t do that for anything more than theoretical work.

Do a comparison of real-world tasks and you should find less of a spread.

Do a comparison of web-server work or something like that – I guess you may find a factor of maybe 5 or 10 – if it’s done right.

Don’t do low-level algorithms in a high-level language for anything more than teaching. If you need such an algorithm – the way to do it is to implement it in a library as a native module. And then it’s compiled to machine code and runs as fast as any other implementation.

#9 By Tim Mensch

It’s worse than nonsense. TypeScript compiles directly to JavaScript, but gets a crazy worse rating somehow?!

#10 By Tim Mensch

For NumPy and machine learning applications, most of the calculations are going to be in C.

The world I’ve found myself in is server code, though. Servers that run 24/7/365.

And in that case, a server written in C or C++ will be able to saturate its network interface at a much lower continuous CPU load than a Python or Ruby server can. So in that respect, the latter languages’ performance issues really do make a difference in ongoing energy usage.

But as you point out, in mobile there could be an even greater difference due to the CPU being put to sleep or into a low power mode if it finishes its work more quickly.

 

Programming, Coding and Algorithms Questions and Answers


Coding is a complex process that requires precision and attention to detail. While there are many resources available to help learn programming, it is important to avoid making some common mistakes. One mistake is assuming that programming is easy and does not require any prior knowledge or experience. This can lead to frustration and discouragement when coding errors occur. Another mistake is trying to learn too much at once. Coding is a vast field with many different languages and concepts. It is important to focus on one area at a time and slowly build up skills. Finally, another mistake is not practicing regularly. Coding is like any other skill: it takes practice and repetition to improve. By avoiding these mistakes, students will be well on their way to becoming proficient programmers.

In addition to avoiding these mistakes, there are certain things that every programmer should do in order to be successful. One of the most important things is to read coding books. Coding books provide a comprehensive overview of different languages and concepts, and they can be an invaluable resource when starting out. Another important thing for programmers to do is never stop learning. Coding is an ever-changing field, and it is important to keep up with new trends and technologies.

Coding is a process of transforming computer instructions into a form a computer can understand. Programs are written in a particular language which provides a structure for the programmer and uses specific instructions to control the sequence of operations that the computer carries out. The programming code is written in and read from a text editor, which in turn is used to produce a software program, application, script, or system.

When you’re starting to learn programming, it’s important to have the right tools and resources at your disposal. Coding can be difficult, but with the proper guidance it can also be rewarding.

This blog is an aggregate of  clever questions and answers about Programming, Coding, and Algorithms. This is a safe place for programmers who are interested in optimizing their code, learning to code for the first time, or just want to be surrounded by the coding environment. 


I think the most common mistakes I witnessed, or made myself, when learning are:


1: Trying to memorize every language construct. Do not rely on your memory; use Stack Overflow.

2: Spending a lot of time solving an issue yourself before you google it. Just about every issue you can stumble upon has, in 99.99% of cases, already been solved by someone else. Learn to properly search for solutions first.

3: Spending a couple of days on a task and realizing it was not worth it. If the time you spend on a single problem is more than half an hour, you are probably doing it wrong; search for alternatives.

4: Writing code from scratch. Do not reinvent the wheel: if you need to write a blog, just find a demo application in the language and framework you chose, and build your logic on top of it. Need some other feature? Find another demo incorporating that feature, and use its code.

In programming you need to be smart and prioritize your time wisely. Diving into deep rabbit holes will not earn you good money.

Because implicit is better than explicit¹.

```python
def onlyAcceptsFooable(bar):
    bar.foo()
```

Congratulations, you have implicitly defined an interface and a function that requires its parameter to fulfil that interface (implicitly).

How do you know any of this? Oh, no problem, just try using the function, and if it fails during runtime with complaints about your bar missing a foo method, you will know what you did wrong.  By Paulina Jonušaitė
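If you want that interface to be explicit in Python, one option (a sketch; the names are taken from the example above) is typing.Protocol, available since Python 3.8:

```python
from typing import Protocol

class Fooable(Protocol):
    def foo(self) -> None: ...

def onlyAcceptsFooable(bar: Fooable) -> None:
    bar.foo()  # a checker like mypy now flags arguments that lack a foo() method
```

Nothing changes at runtime, but the requirement is now written down where readers and type checkers can see it.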

Djamgatech: Build the skills that’ll drive your career into six figures: Get Djamgatech.


Best != easy and easy != best. Interpreted BASIC is easy, but not great for programming anything more complex than tic-tac-toe. C++, C#, and Java are very widely used, but none of them are what I would call easy.

Is Python an exception? It’s a fine scripting language if performance isn’t too critical. It’s a fine wrapper language for libraries coded in something performant like C++. Python’s basics are pretty easy, but it is not easy to write large or performant programs in Python.

Like most things, there is no shortcut to mastery. You have to accept that if you want to do anything interesting in programming, you’re going to have to master a serious, not-easy programming language. Maybe two or three. Source.

Type declarations mainly aren’t for the compiler — indeed, types can be inferred and/or dynamic so you don’t have to specify them.

They’re there for you. They help make code readable. They’re a form of active, compiler-verified documentation.


For example, look at this method/function/procedure declaration:

locate(tr, s) { … } 

  • What type is tr?
  • What type is s?
  • What type, if any, does it return?
  • Does it always accept and return the same types, or can they change depending on values of tr, s, or system state?

If you’re working on a small project — which most JavaScript projects are — that’s not a problem. You can look at the code and figure it out, or establish some discipline to maintain documentation.

If you’re working on a big project, with dozens of subprojects and developers and hundreds of thousands of lines of code, it’s a big problem. Documentation discipline will get forgotten, missed, inconsistent or ignored, and before long the code will be unreadable and simple changes will take enormous, frustrating effort.

But if the compiler obligates some or all type declarations, then you say this:

Node locate(NodeTree tr, CustomerName s) { … }

Now you know immediately what type it returns and the types of the parameters, you know they can’t change (except perhaps to substitutable subtypes); you can’t forget, miss, ignore or be inconsistent with them; and the compiler will guarantee you’ve got the right types.

That makes programming — particularly in big projects — much easier. Source: Dave Voorhis
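The same trade-off exists in Python, where type hints are optional. A minimal sketch, using stub classes standing in for the types from the example above:

```python
class Node: ...
class NodeTree: ...
class CustomerName: ...

def locate(tr: NodeTree, s: CustomerName) -> Node:
    ...  # the annotations document the contract even before any body is written
```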

  • COBOL. Verbose like no other, excess structure, unproductive, obtuse, limited, rigid.
  • JavaScript. Insane semantics, weak typing, silent failure. Thankfully, one can use transpilers for more rationally designed languages to target it (TypeScript, ReScript, js_of_ocaml, PureScript, Elm.)
  • ActionScript. Macromedia Flash’s take on ECMA-262 (i.e., ~JavaScript) back in the day. Its static typing was gradual, so the compiler wasn’t big on type error-catching. This one’s thankfully deader than disco.
  • BASIC. Mandatory line numbering. Zero standardization. Not even a structured language — you’ve never seen that much spaghetti code.
  • In the realm of dynamically typed languages, anything that is not in the Lisp family. To me, Lisps are just more elegant and richer-featured than the rest. Alexander feterman

Object-oriented programming is “a programming model that organizes software design around data, or objects, rather than functions and logic.”

Most games are made of “objects” like enemies, weapons, power-ups etc. Most games map very well to this paradigm. All the objects are in charge of maintaining their own state, stats and other data. This makes it incredibly easier for a programmer to develop and extend video games based on this paradigm.

I could go on, but I’d need an easel and charts. Chrish Nash

OK… I think this is one of the most important questions to answer. Based on my personal experience as a programmer, I would say you must learn the following 5 universal core concepts of programming to become a successful Java programmer.

(1) Mastering the fundamentals of the Java programming language – This is the most important skill you must build to become a successful Java programmer. Master the fundamentals of the language, especially areas like OOP, Collections, Generics, Concurrency, I/O, Strings, Exception handling, Inner Classes, and JVM architecture.

Recommended readings are OCA Java SE 8 Programmer by Kathy Sierra and Bert Bates (first read Head First Java if you are a newcomer) and Effective Java by Joshua Bloch.

(2) Data Structures and Algorithms – Programming languages are basically just a tool to solve problems. Problems generally involve data to process and decisions to make, and we have to build a procedure that solves the specific problem domain. In real life, the complexity of the problem domain and the amount of data to handle can be very large. That is why it is essential to know basic data structures like Arrays, Linked Lists, Stacks, Queues, Trees, Heaps, Dictionaries, Hash Tables, and Graphs, as well as basic algorithms like Searching, Sorting, Hashing, Graph algorithms, Greedy algorithms, and Dynamic Programming.

Recommended readings are Data Structures & Algorithms in Java by Robert Lafore (beginner), Algorithms by Robert Sedgewick (intermediate), and Introduction to Algorithms, MIT Press, by CLRS (advanced).

(3) Design Patterns – Design patterns are general, reusable solutions to commonly occurring problems within a given context in software design, and they are absolutely crucial for a hard-core Java programmer. If you don’t use design patterns you will write much more code, and it will be buggy and hard to understand and refactor, not to mention untestable; patterns are also a great way to communicate your intent quickly to other programmers.

Recommended readings are Head First Design Patterns by Elisabeth Freeman and Kathy Sierra, and Design Patterns: Elements of Reusable Object-Oriented Software by the Gang of Four.

(4) Programming Best Practices – Programming is not only about learning and writing code. Code readability is a universal subject in the world of computer programming; it helps standardize products and reduces future maintenance costs. Best practices help you, as a programmer, think differently and improve your problem-solving attitude. A simple program can be written in many ways if given to multiple developers, which is why best practices matter, and every programmer must be aware of them.

Recommended readings are Clean Code by Robert Cecil Martin and Code Complete by Steve McConnell.

(5) Testing and Debugging (T&D) – Beyond writing the code for a specific problem domain, you have to learn how to test that code and debug it when needed. Some programmers skip unit testing or other testing methodologies and leave it to the QA team. That leads to delivering code with 80% of its bugs still hiding in it to the QA team, which reduces productivity and pushes your project toward failure. When misbehavior or a bug occurs in your code during the testing phase, it is essential to know debugging techniques to identify the bug and its root cause.

Recommended readings are Debugging by David Agans and A Friendly Introduction to Software Testing by Bill Laboon.

I hope these instructions will help you become a successful Java programmer. Here I explain only the universal core concepts that you must learn as a successful programmer. I am not mentioning technologies that a Java programmer must know, such as Spring, Hibernate, Microservices, and build tools, because those can change according to the problem domain or environment you are currently working in… Happy coding!

 

Hard to be balanced on this one.

They are useful to know. If ever you need to use, or make a derivative of algorithm X, then you’ll be glad you took the time.

If you learn them, you’ll learn general techniques: sorting, trees, iteration, transformation, recursion. All good stuff.

You’ll get a feeling for the kinds of code you cannot write if you need certain speeds or memory use, given a certain data set.

You’ll pass certain kinds of interview test.

You’ll also possibly never use them. Or use them very infrequently.

If you mention that on here, some will say you are a lesser developer. They will insist that the line between good and not good developers is algorithm knowledge.

That’s a shame, really.

In commercial work, you never start a day thinking ‘I will use algorithm X today’.

The work demands the solution. Not the other way around.

This is yet another proof that a lot of technical-sounding stuff is actually all about people. Their investment in something. Need for validation. Preference.

The more you know in development, the better. But I would not prioritize algorithms right at the top, based on my experience. Alan Mellor

So you’re inventing a new programming language and considering whether to write either a compiler or an interpreter for your new language in C or C++?

The only significant disadvantage of C++ is that in the hands of bad programmers, they can create significantly more chaos in C++ than they can in C.

But for experienced C++ programmers, the language is immensely more powerful than C and writing clear, understandable code in C++ can be a LOT easier.

INCIDENTALLY:

If you’re going to actually do this – then I strongly recommend looking at a pair of tools called “flex” and “bison” (which are OpenSourced versions of the more ancient “lex” and “yacc”). These tools are “compiler-compilers” that are given a high level description of the syntax of your language – and automatically generate C code (which you can access from C++ without problems) to do the painful part of generating a lexical analyzer and a syntax parser. Steve Baker
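For a feel of the same division of labor in Python, here is a minimal lexer sketch using the PLY library (an open-source lex/yacc workalike; assumes `pip install ply`):

```python
import ply.lex as lex

tokens = ('NUMBER', 'PLUS', 'MINUS')

t_PLUS = r'\+'     # each simple rule is just a regular expression
t_MINUS = r'-'
t_ignore = ' \t'

def t_NUMBER(t):
    r'\d+'
    t.value = int(t.value)
    return t

def t_error(t):
    print('Illegal character {!r}'.format(t.value[0]))
    t.lexer.skip(1)

lexer = lex.lex()  # PLY builds the lexical analyzer from the rules above
lexer.input('3 + 4 - 2')
for tok in lexer:
    print(tok.type, tok.value)
```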

Did you know you can google this answer yourself? Search for “c++ private keyword” and follow the link to access specifiers, which goes into great detail and has lots of examples. In case google is down, here’s a brief explanation of access specifiers:

  • The private access specifier in a class or struct definition makes the declarations that occur after the specifier private. A private declaration is visible only inside the class/struct, and not in derived classes or structs, and not from outside.
  • The protected access specifier makes declarations visible in the current class/struct and also in derived classes and structs, but not visible from outside. protected is not used very often and some wise people consider it a code smell.
  • The public access specifier makes declarations visible everywhere.
  • You can also use access specifiers to control access to all the items in a base class. By Kurt Guntheroth

Rust programmers do mention the obvious shortcomings of the language.

Such as that a lot of data structures can’t be written without unsafe due to pointer complications.

Or that they haven’t agreed what it means to call unsafe code (although this is somewhat of a solved problem, just like calling into assembler from C0 in the sysbook).

The main problem of the language is that it doesn’t absolve the programmers from doing good engineering.

It just catches a lot of the human errors that can happen despite such engineering. Jonas Oberhauser.

Comparing cross-language performance of real applications is tricky. We usually don’t have the resources for writing said applications twice. We usually don’t have the same expertise in multiple languages. Etc. So, instead, we resort to smaller benchmarks. Occasionally, we’re able to rewrite a smallish critical component in the other language to compare real-world performance, and that gives a pretty good insight. Compiler writers often also have good insights into the optimization challenges for the language they work on.

My best guess is that C++ will continue to have a small edge in optimizability over Rust in the long term. That’s because Rust aims at a level of memory safety that constrains some of its optimizations, whereas C++ is not bound to such considerations. So I expect that very carefully written C++ might be slightly faster than equivalent very carefully written Rust.

However, that’s perhaps not a useful observation. Tiny differences in performance often don’t matter: The overall programming model is of greater importance. Since both languages are pretty close in terms of achievable performance, it’s going to be interesting watching which is preferable for real-life engineering purposes: The safe-but-tightly-constrained model of Rust or the more-risky-but-flexible model of C++.  By David VandeVoorde

  1. Lisp does not expose the underlying architecture of the processor, so it can’t replace my use of C and assembly.
  2. Lisp does not have significant statistical or visualization capabilities, so it can’t replace my use of R.
  3. Lisp was not built with unix filesystems in mind, so it’s not a great choice to replace my use of bash.
  4. Lisp has nothing at all to do with mathematical typesetting, so it won’t be replacing LaTeX anytime soon.
  5. And since I use vim, I don’t even have the excuse of learning lisp so as to modify emacs while it’s running.

In fewer words: for the tasks I get paid to do, lisp doesn’t perform better than the languages I currently use. By Barry RoundTree

What are some things that only someone who has been programming 20-50 years would know?

The truth of the matter gained through the multiple decades of (my) practice (at various companies) is ugly, not convenient and is not what you want to hear.

  1. The technical job interviews are a non-indicative, non-predictive waste of time; to put it bluntly, garbage (a Navy Seal can be as brave as (s)he wants to be during the training, but only when the said Seal meets the bad guys face to face on the front line can her/his true mettle be revealed).
  2. An average project in an average company, both averaged the globe over, is staffed with mostly random, technically inadequate, people who should not be doing what they are doing.
  3. Such random people have no proper training in mathematics and computer science.
  4. As a result, all the code generated by these folks out there is a flimsy, low-quality, hugely inefficient, non-scalable, non-maintainable, hardly readable steaming pile of spaghetti mess – the absence of structure, order, discipline and understanding in one’s mind is reflected at the keyboard 100 percent of the time.
  5. It is a major hail mary, a hallelujah and a standing ovation to the genius of Alan Turing for being able to create a (Turing) Machine that, on the one hand, can take this infinite abuse and, on the other hand, being nothing short of a miracle, still produce binaries that just work. Or so they say.
  6. There is one and only one definition of a computer programmer: that of a person who combines all of the following skills and abilities:
    1. the ability to write a few lines of properly functioning (C) code in the matter of minutes
    2. the ability to write a few hundred lines of properly functioning (C) code in the matter of a small number of hours
    3. the ability to write a few thousand lines of properly functioning (C) code in the matter of a small number of weeks
    4. the ability to write a small number of tens of thousands of lines of properly functioning (C) code in the matter of several months
    5. the ability to write several hundred thousand lines of properly functioning (C) code in the matter of a small number of years
    6. the ability to translate a given set of requirements into source code that is partitioned into a (large) collection of (small and sharp) libraries and executables that work well together and that can withstand a steady-state non stop usage for at least 50 years
  7. It is this ability to sustain the above multi-year effort during which the intellectual cohesion of the output remains consistent and invariant is what separates the random amateurs, of which there is a majority, from the professionals, of which there is a minority in the industry.
  8. There is one and only one definition of the above properly functioning code: that of a code that has a check mark in each and every cell of the following matrix:
    1. the code is algorithmically correct
    2. the code is easy to read, comprehend, follow and predict
    3. the code is easy to debug
      1. the intellectual effort to debug code, symbolized as E(d), is strictly larger than the intellectual effort to write code, symbolized as E(w). That is: E(d) > E(w). Thus, it is entirely possible to write a unit of code that even you, the author, can not debug
    4. the code is easy to test
      1. in different environments
    5. the code is efficient
      1. meaning that it scales well performance-wise when the size of the input grows without bound in both configuration and data
    6. the code is easy to maintain
      1. the addition of new and the removal or the modification of the existing features should not take five metric tons of blood, three years and a small army of people to implement and regression test
      2. the certainty of and the confidence in the proper behavior of the system thus modified should be high
      3. (read more about the technical aspects of code modification in the small body of my work titled “Practical Design Patterns in C” featured in my profile)
      4. (my claim: writing proper code in general is an optimization exercise from the theory of graphs)
    7. the code is easy to upgrade in production
      1. lifting the Empire State Building in its entirety 10 feet in the thin blue air and sliding a bunch of two-by-fours underneath it temporarily, all the while keeping all of its electrical wires and the gas pipes intact, allowing the dwellers to go in and out of the building and operating its elevators, should all be possible
      2. changing the engine and the tires on an 18-wheeler truck hauling down a highway at 80 miles per hour should be possible
  9. A project staffed with nothing but technically capable people can still fail – the team cohesion and the psychological compatibility of team members is king. This is raw and unbridled physics – a team, or a whole, is more than the sum of its members, or parts.
  10. All software project deadlines without exception are random and meaningless guesses that have no connection to reality.
  11. Intelligence does not scale – a million fools chained to a million keyboards will never amount to one proverbial Einstein. Source
 

A function pulls a computation out of your program and puts it in a conceptual box labeled by the function’s name. This lets you use the function name in a computation instead of writing out the computation done by the function.

Writing a function is like defining an obscure word before you use it in prose. It puts the definition in one place and marks it out saying, “This is the definition of xxx”, and then you can use the one word in the text instead of writing out the definition.

Even if you only use a word once in prose, it’s a good idea to write out the definition if you think that makes the prose clearer.

Even if you only use a function once, it’s a good idea to write out the function definition if you think it will make the code clearer to use a function name instead of a big block of code. Source.
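
To make the analogy concrete, here is a minimal Python sketch (ours, not the original author's; the names are invented): the same computation written inline, then pulled into a named function.

```python
prices = [4.99, 12.50, 3.25]

# Inline: the reader has to decode the intent of the expression.
total = sum(prices) * 1.08  # why 1.08?

# Named: the function name documents the computation, like a defined word.
def total_with_sales_tax(prices, tax_rate=0.08):
    """Sum the prices and apply sales tax."""
    return sum(prices) * (1 + tax_rate)

total = total_with_sales_tax(prices)
print(total)
```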

Conditional statements of the form if this instance is type T then do X can generally — and usually should — be removed by appropriate use of polymorphism.

All conditional statements might conceivably be replaced in that fashion, but the added complexity would almost certainly negate its value. It’s best reserved for where the relevant types already exist.

Creating new types solely to avoid conditionals sometimes makes sense (e.g. maybe create distinct nullable vs not-nullable types to avoid if-null/if-not-null checks) but usually doesn’t. Source.
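
As an illustrative Python sketch of that idea (our example, with invented class names): each type supplies its own behaviour, so the "if this instance is type T" conditional disappears.

```python
class Circle:
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Square:
    def __init__(self, side):
        self.side = side
    def area(self):
        return self.side ** 2

# The style the passage argues against: branch on the concrete type.
def area_with_conditionals(shape):
    if isinstance(shape, Circle):
        return 3.14159 * shape.r ** 2
    elif isinstance(shape, Square):
        return shape.side ** 2

# With polymorphism the dispatch happens through the method call itself.
shapes = [Circle(2), Square(3)]
print(sum(shape.area() for shape in shapes))  # no isinstance checks needed
```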

Something bad happens as your Java code runs.

Throw an exception.

The following lines after the throw do not run, saving them from the bad thing.

Control is handed back up the call stack until the Java runtime finds a catch() statement that matches the exception.

The code resumes running from there. Source: Alan Mellor
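
The description is about Java, but the control flow is identical in Python; a minimal hedged sketch (function and message names invented):

```python
def withdraw(balance, amount):
    if amount > balance:
        raise ValueError("insufficient funds")  # "something bad happens"
    return balance - amount                     # skipped when we raise

try:
    new_balance = withdraw(balance=50, amount=100)
    print(new_balance)       # never runs if the exception was raised
except ValueError as err:    # the matching "catch" up the call stack
    print(f"handled: {err}") # execution resumes from here
```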

Google has better programmers, and they’ve been working on the problem space longer than either Spotify or the other providers have existed.

YouTube has a year and a half on Spotify, for example, and they’ve been employing a lot of “organ bank” engineers from Google proper, for various problems — like the “similar to this one” problem — and the engineers doing the work are working on much larger teams, overall.

Spotify is resource starved, because they really aren’t raking in the same ratio of money that YouTube does. By Terry Lambert

Over the past two decades, Java has moved from a fairly simple ecosystem, with the relatively straightforward ANT build tool, to a sophisticated ecosystem where Maven or Gradle is basically required. As a result, this kind of approach doesn’t really work well anymore. I highly recommend that you download the community edition of IntelliJ IDEA; this is a free version of a great commercial IDE. By Joshua Gross

Best bet is to turn it into a record type as a pure data structure. Then you can start to work on that data. You might do that direct, or use it to construct some OOP objects with application specific behaviours on them. Up to you.

You can decide how far to take layering as well. Small apps work ok with the data struct in the exact same format as the JSON data passed around. But you might want to isolate that and use a mapping to some central domain model. Then if the JSON schema changes, your domain model won’t.

Libraries such as Jackson and Gson can handle the conversion. Many frameworks have something like it built in, so you get delivered a pure data struct ‘object’ containing all the data that was in the JSON

Things like JSON validators and JSON Schemas can help you validate the response JSON if need be. By Alan Mellor
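
The answer above is Java-flavoured (Jackson/Gson). As a rough analogue of the same "JSON into a record type" idea, here is a minimal Python sketch using only the standard library (the class, fields and payload are invented for illustration):

```python
import json
from dataclasses import dataclass

@dataclass
class Customer:      # the pure data structure ("record type")
    name: str
    email: str

raw = '{"name": "Ada", "email": "ada@example.com"}'  # hypothetical response
data = json.loads(raw)          # JSON text -> plain dict
customer = Customer(**data)     # dict -> typed record to work on

print(customer.name)  # Ada
```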

Keith Adams already gave an excellent overview of Slack’s technology stack so I will do my best to add to his answer.

Products that make up Slack’s tech stack include: Amazon (CloudFront, CloudSearch, EMR, Route 53, Web Services), Android Studio, Apache (HTTP Server, Kafka, Solr, Spark, Web Server), Babel, Brandfolder, Bugsnag, Burp Suite, Casper Suite, Chef, DigiCert, Electron, Fastly, Git, HackerOne, JavaScript, Jenkins, MySQL, Node.js, Objective-C, OneLogin, PagerDuty, PHP, Redis, Smarty, Socket, Xcode, and Zeplin.

Additionally, here’s a list of other software products that Slack is using internally:

  • Marketing: AdRoll, Convertro, MailChimp, SendGrid
  • Sales and Support: Cnflx, Front, Typeform, Zendesk
  • Analytics: Google Analytics, Mixpanel, Optimizely, Presto
  • HR: AngelList Jobs, Culture Amp, Greenhouse, Namely
  • Productivity: ProductBoard, Quadro, Zoom, Slack (go figure!)

For a complete list of software used by Slack, check out: Slack’s Stack on Siftery

Some other fun facts about Slack:

  • Slack is used by 55% of Unicorns (and 59% of B2B Unicorns)
  • Slack has 85% market share in Siftery’s Instant Messaging category
  • Slack is used by 42% of both Y Combinator and 500 Startups companies
  • 35% of companies in the Sharing Economy use Slack

(Disclaimer: The above data was pulled from Siftery and has been verified by individuals working at Slack) By Gerry Giacoman Colyer

Programmers should use recursion when it is the cleanest way to define a process. Then, WHEN AND IF IT MATTERS, they should refine the recursion and transform it into a tail recursion or a loop. When it doesn’t matter, leave it alone. Jamie Lawson
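
A minimal Python sketch of that workflow (illustrative only): start with the clean recursive definition, and refine it into a loop when and if it matters.

```python
# Cleanest way to define the process: recursion.
def factorial(n):
    return 1 if n == 0 else n * factorial(n - 1)

# Refinement, when and if it matters: the same process as a loop,
# avoiding call overhead and recursion depth limits.
def factorial_loop(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

assert factorial(10) == factorial_loop(10) == 3628800
```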
 
 

Your phone runs a version of Linux, which is programmed in C. Only the top layer is programmed in java, because performance usually isn’t very important in that layer.

Your web browser is programmed in C++ or Rust. There is no java anywhere. Java wasn’t secure enough for browser code (but somehow C++ was? Go figure.)

Your Windows PC is programmed mostly in C++. Windows is very old code, that is partially C. There was an attempt to recode the top layer in C#, but performance was not good enough, and it all had to be recoded in C++. Linux PCs are coded in C.

Your intuition that most things are programmed in java is mistaken. Kurt Guntheroth

That’s not possible in Java, or at least the language steers you away from attempting that.

Global variables have significant disadvantages in terms of maintainability, so the language itself has no way of making something truly global.

The nearest approach would be to abuse some language features like so:

  • public class Globals { 
  •     public static int[] stuff = new int[10]; 
  • } 

Then you can use this anywhere with

  • Globals.stuff[0] = 42; 

Java isn’t Python, C nor JavaScript. It’s reasonably opinionated about using Object Oriented Programming, which the above snippets are not examples of.

This also uses a raw array, which is a fixed size in Java. Again, not very useful, we prefer ArrayList for most purposes, which can grow.

I’d recommend the above approach if and only if you have no alternatives, are not really wanting to learn Java and just need a dirty utility hack, or are starting out in programming just finding your feet. Alan Mellor

In which situations is NoSQL better than relational databases such as SQL? What are specific examples of apps where switching to NoSQL yielded considerable advantages?

Warning: The below answer is a bit oversimplified, for pedagogical purposes. Picking a storage solution for your application is a very complex issue, and every case will be different – this is only meant to give an overview of the main reason why people go NoSQL.

There are several possible reasons why companies go NoSQL, but the most common scenario is probably when one database server is no longer enough to handle your load. NoSQL solutions are much better suited to distributing load over shitloads of database servers.

This is because relational databases traditionally deal with load balancing by replication. That means that you have multiple slave databases that watch a master database for changes and replicate them to themselves. Reads are made from the slaves, and writes are made to the master. This works to a certain level, but it has the annoying side-effect that the slaves will always lag slightly behind, so there is a delay between the time of writing and the time that the object is available for reading, which is complex and error-prone to handle in your application. Also, the single master eventually becomes a bottleneck no matter how powerful it is. Plus, it’s a single point of failure.

NoSQL generally deals with this problem by sharding. Overly simplified, it means that users with userid 1–1000000 are on server A, users with userid 1000001–2000000 are on server B, and so on. This solves the problems that relational replication has, but the drawback is that features such as aggregate queries (SUM, AVG etc.) and traditional transactions are sacrificed.
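
A toy Python sketch of that range-based routing (server names and shard size are invented; real systems also rebalance shards rather than wrapping around):

```python
SHARD_SIZE = 1_000_000
SHARDS = ["server_a", "server_b", "server_c"]  # hypothetical servers

def shard_for(user_id: int) -> str:
    """Map userid 1..1000000 to server_a, 1000001..2000000 to server_b, etc."""
    index = (user_id - 1) // SHARD_SIZE
    return SHARDS[index % len(SHARDS)]  # wrap-around only for illustration

print(shard_for(42))         # server_a
print(shard_for(1_500_000))  # server_b
```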

For some case studies, I believe Couchbase pimps a whitepaper on their web site here: http://www.couchbase.com/why-nosql/use-cases .  Mattias Peter Johansson

Chrome is coded in C++, assembler and Python. How could three different languages be used to obtain only one product? What is the method used to merge programming languages to create software?

Concretely, a processor can directly execute only one kind of instruction: machine code, which assembly represents one-for-one. The available instructions also depend on the type of processor.

As the assembler requires several operations just to make a simple addition, we had to create compilers which, starting from a higher level language (easier to write), are able to automatically generate the assembly code.

These compilers can sometimes accept several languages. For example, the GCC compiler can compile both C and C++, and it also accepts embedded pieces of assembly, introduced by the __asm__ keyword. Assembly is still something to avoid wherever possible, because it is completely dependent on the machine and can therefore be a source of interference and unpleasant surprises.

More generally, we also often create multi-language applications using several components (libraries, DLLs, ActiveX, etc.). The interfaces between these components are managed by the operating system and allow Java, C, C++, C#, Python, and everything you could wish for to coexist happily. A certain finesse is however necessary in the transitions between languages, because each one has its implicit rules, which must therefore be enforced very explicitly.

For example, an object coming from the C++ world and transferred through these interfaces into a Java program will have to be destroyed explicitly; the Java garbage collector only manages its own objects.

Another practical interface is web services: each module, whatever its technology, can communicate with the others by exchanging serialized objects in JSON, which is much less a source of errors! Source: Vincent Steyer

What is the most dangerous code you have ever seen?

This line removes the filesystem (starting from root /)
  • sudo rm -rf --no-preserve-root /
Or for more fun, a Russian roulette:
  • [ $[ $RANDOM % 6 ] == 0 ] && rm -rf --no-preserve-root / || echo *click* 

(a one-in-six chance of running the destructive command described above; otherwise “*click*” is displayed)

Javascript (or more precisely ECMAScript). And it’s a lot faster than the others. Surprised?

When in 2009 I heard about Node.js, I thought that people had lost their minds to use Javascript on the server side. But I had to change my mind.

Node.js is lightning fast. Why? First of all because it is async, but also because of V8, the open source engine of Google Chrome: even the Javascript language itself became incredibly fast. The war of the browsers brought us hyper-optimized Javascript interpreters/compilers.

In intensive computational algorithms, it is more than one order of magnitude faster than PHP, Ruby, and Python. In fact, with V8 (http://code.google.com/p/v8/ ), Javascript became the fastest scripting language on earth.

Does it sound too bold? Look at the benchmarks: http://shootout.alioth.debian.org/

Note: with regular expressions, V8 is even faster than C and C++! Impossible? The reason is that V8 compiles native machine code ad-hoc for the specific regular expressions (see http://blog.chromium.org/2009/02/irregexp-google-chromes-new-regexp.html )

If you are interested, you can learn how to use node: http://www.readwriteweb.com/hack/2011/04/6-free-e-books-on-nodejs.php 🙂

Regarding the language: Javascript is not the most elegant language, but it is definitely a lot better than what some people may think. The current version of Javascript (or better, ECMAScript as specified in ECMA-262 5th edition) is good. If you adopt “use strict”, some strange and unwanted behaviors of the language are eliminated. Harmony, the codename for a future version, is going to be even better and add some extra syntactical sugar similar to some of Python’s constructs.

If you want to learn Javascript (not just server side), the best book is Professional Javascript for Web Developers by Nicholas C. Zakas. But if you are cheap, you can still get a lot from http://eloquentjavascript.net/ and http://addyosmani.com/resources/essentialjsdesignpatterns/book/

Does Javascript still sound too archaic? Try Coffeescript (from the same author of Backbone.js), which compiles to Javascript. Coffeescript makes for cleaner, easier and more concise programming in environments that use Javascript (i.e. the browser and Node.js). It’s a relatively new language that is not perfect yet, but it is getting better: http://coffeescript.org/

source: Here

In general, the important advantage of C++ is that it uses computers very efficiently, and offers developers a lot of control over expensive operations like dynamic memory management. Writing in C++ versus Java or Python is the difference between spinning up 1,000 cloud instances versus 10,000. The cost savings in electricity alone justifies the cost of hiring specialist programmers and dealing with the difficulties of writing good C++ code. Source

You really need to understand C++ pretty well to have any idea why Rust is the way it is. If you only want to work at Mozilla, learn Rust. Otherwise learn C++ and then switch to Rust if it breaks out and becomes more popular.

Rust is one step forward and two steps back from C++. Embedding the notion of ownership in the language is an obvious improvement over C++. Yay. But Rust doesn’t have exceptions. Instead, it has a bunch of strange little features to provide the RAII’ish behavior that makes C++ really useful. I think on average people don’t know how to teach or how to use exceptions even still. It’s too soon to abandon this feature of C++. Source: Kurt Guntheroth

Java or Javascript-based web applications are the most common. (Yuk!) And, consequently, you’ll be a “dime a dozen” programmer if that’s what you do.

On the other hand, (C++ or C) embedded system programming (i.e. hardware-based software), high-capacity backend servers in data centers, internet router software, factory automation/robotics software, and other operating system software are the least common, and consequently the most in demand. Source: Steven Ussery

I want to learn to program. Should I begin with Java or Python?

Your first language doesn’t matter very much. Both Java and Python are common choices. Python is more immediately useful, I would say.

When you are learning to program, you are learning a whole bunch of things simultaneously:

  • How to program
  • How to debug programs that aren’t working
  • How to use programming tools
  • A language
  • How to learn programming languages
  • How to think about programming
  • How to manage your code so you don’t paint yourself into corners, or end up with an unmanageable mess
  • How to read documentation

Beginners often focus too much on their first language. It’s necessary, because you can’t learn any of the others without that, but you can’t learn how to learn languages without learning several… and that means any professional knows a bunch and can pick up more as required. Source: Andrew  McGregor

Absolutely.

If you’re a backend or full-stack engineer, it’s reasonable to focus on your preferred tech, but you’ll be expected to have at least some familiarity with Java, C#, Python, PHP, bash, Docker, HTML/CSS…

And, you need to be good with SQL.

That’s the minimum you should achieve.

The more you know, the more employable — and valuable to your employer or clients — you will be.

Also, languages and platforms are tools. Some tools are more appropriate to some tasks than others.

That means sometimes Node.js is the preferred choice to meet the requirements, and sometimes Java is a better choice — after considering the inevitable trade-offs with every technical decision.  Source: Dave Voohis

Just one?

No, no, that’s not how it works.

To be a competent back-end developer, you need to know at least one of the major, core, back-end programming languages — Java (and its major frameworks, Spring and Hibernate) and/or C# (and its major frameworks, .NET Core and Entity Framework.)

You might want to have passing familiarity with the up-and-coming Go.

You need to know SQL. You can’t even begin to do back-end development without it. But don’t bother learning NoSQL tools until you need to use them.

You should be familiar with the major cloud platforms, AWS and Azure. Others you can pick up if and as needed.

Know Linux, because most back-end infrastructure runs on Linux and you’ll eventually encounter it, even if it’s often hived away into various cloud-based services.

You should know Python and bash scripts. Understand Apache Web Server configuration. Be familiar with Nginx, and if you’re using Java, have some understanding of how Apache Tomcat works.

Understand containerization. Be good with Docker.

Be familiar with JavaScript and HTML/CSS. You might not have to write them, but you’ll need to support front-end devs and work with them and understand what they do. If you do any Node.js (some of us do a lot, some do none), you’ll need to know JavaScript and/or TypeScript and understand Node.

That’ll do for a start.

But even more important than the above, learn computer science.

Learn it, and you’ll learn that programming languages are implementations of fundamental principles that don’t change, whilst programming languages come and go.

Learn those fundamental principles, and it won’t matter what languages are in the market — you’ll be able to pick up any of them as needed and use them productively. Source: Dave Voohis

It sounds like you’re spending too much time studying Python and not enough time writing Python.

The only way to become good at any programming language — and programming in general — is to practice writing code.

It’s like learning to play a musical instrument: Practice is essential.

Try to write simple programs that do simple things. When you get them to work, write more complex programs to do more complex things.

When you get stuck, read documentation, tutorials and other peoples’ code to help you get unstuck.

If you’re still stuck, set aside what you’re stuck on and work on a different program.

But keep writing code. Write a lot of code.

The more code you write, the easier it will become to write more code. Source: Dave Voohis

It depends on what you want to do.

If you want to just mess around with programming as a hobby, it’s fine. In fact, it’s pretty good. Since it’s “batteries included”, you can often get a lot done in just a few lines of code. Learn Python 3, not 2.

If you want to be a professional software engineer, Python’s a poor place to start. Its syntax isn’t terrible, but it’s weird. Its take on OO is different from almost all other OO languages. It’ll teach you bad habits that you’ll have to unlearn when switching to another language.

If you want to eventually be a professional software engineer, learn another OO language first. I prefer C#, but Java’s a great choice too. If you don’t care about OO, C is a great choice. Nearly all major languages inherited their syntax from C, so most other languages will look familiar if you start there.

C++ is a stretch these days. Learn another OO language first. You’ll probably eventually have to learn JavaScript, but don’t start there. It… just don’t.

So, ya. If you just want to do some hobby coding and write some short scripts and utilities, Python’s fine. If you want to eventually be a pro SE, look elsewhere. Source: Chris Nash

You master a language by using it, not just reading about it and memorizing trivia. You’ll pick up and internalize plenty of trivia anyway while getting real world work done.

Reading books and blogs and whatnot helps, but those are more meaningful if you have real world problems to apply the material to. Otherwise, much of it is likely to go into your eyeballs and ooze right back out of your ears, metaphorically speaking.

I usually don’t dig into all the low level details when reading a programming book, unless it’s specifically needed for a problem I am trying to solve. Or, it caught my curiosity, in which case, satisfying my curiosity is the problem I am trying to solve.

Once you learn the basics, use books and other resources to accelerate you on your journey. What to read, and when, will largely be driven by what you decide to work on.

Bjarne Stroustrup, the creator of C++, has this to say:

And no, I’m not a walking C++ dictionary. I do not keep every technical detail in my head at all times. If I did that, I would be a much poorer programmer. I do keep the main points straight in my head most of the time, and I do know where to find the details when I need them.

Source: Joe Zbiciak

Scale. There is no field other than software where a company can have 2 billion customers, and do it with only a few tens of thousands of employees. The only others that come close are petroleum and banking – both of which are also very highly paid. By David Seidman

Professional programmer’s code:

  • //Here we address strange issue that was seen on 
  • //production a few times, but is not reproduced  
  • //locally. User can be mysteriously logged out after 
  • //clicking Back button. This seems related to recent 
  • //changes to redirect scheme upon order confirmation. 
  • login(currentUser()); 

Average programmer’s code:

  • //Hotfix – don’t ask 
  • login(currentUser()); 

Professional programmer’s commit message:

  • Fix memory leak in connection pool 
 
  • We’ve seen connections leaking from the pool 
  • if any query had already been executed through 
  • it and then an exception is thrown. 
  •  
  • The root cause was found in the ConnectionPool.addExceptionHook() 
  • method, which ignored certain types of exceptions. 

Average programmer’s commit message:

  • Small fix 

Professional programmer’s test naming:

  • login_shouldThrowUserNotFoundException_ifUserAbsentInDB() 
  • login_shouldSetCurrentUser_ifLoginSuccessful() 
  • login_shouldRecordAuditMessage_uponUnsuccessfulLogin() 

Average programmer’s test naming:

  • testLogin1() 
  • testLogin2() 
  • testLogin3() 

After the first few years of programming, when the urge to put some cool-looking construct only you can understand into every block of code wears off, you’ll likely come to the conclusion that these examples are actually the code you want to encounter when opening a new project.

If we look at the apps written by good vs average programmers (not talking about total beginners), the code itself is not that much different; but if small conveniences everywhere allow you to avoid frustration while reading it, it is likely written by a professional.

The only valid measurement of code quality is WTFs/minute.

Here are 5 very common ones asked in coding interviews. If you don’t know these, then you’re probably not ready.

  1. Graph Search – Depth-first and Breadth-first search
  2. Binary Search
  3. Backtracking using Recursion and Memoization
  4. Searching a Binary Search Tree
  5. Recursion over a Binary Tree

Of course, there are many others too.

Another thing to keep in mind – you won’t be asked these directly. It will be disguised as a unique situation.
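
For instance, pattern 2 (Binary Search) in its plainest form; a minimal, illustrative Python sketch:

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if absent."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            lo = mid + 1   # target can only be in the right half
        else:
            hi = mid - 1   # target can only be in the left half
    return -1

print(binary_search([1, 3, 5, 7, 11], 7))  # 3
print(binary_search([1, 3, 5, 7, 11], 4))  # -1
```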

source: quora

I worked as an academic in physics for about 10 years, and used Fortran for much of that time. I had to learn Fortran for the job, as I was already fluent in C/C++.

The prevalence of Fortran in computational physics comes down to three factors:

  1. Performance. Yes, Fortran code is typically faster than C/C++ code. One of the main reasons for this is that Fortran compilers are heavily optimised towards making fast code, and the Fortran language spec is designed such that compilers will know what to optimise. It’s possible to make your C program as fast as a Fortran one, but it’s considerably more work to do so.
  2. Convenience. Imagine you want to add a scalar to an array of values – this is the sort of thing we do all the time in physics. In C you’d either need to rely on an external library, or you’d need to write a function for this (leading to verbose code). In Fortran you just add them together, and the scalar is broadcasted across all elements of the array. You can do the same with multiplication and addition of two arrays as well. Fortran was originally the Formula-translator, and therefore makes math operations easy.
  3. Legacy. When you start a PhD, you’re often given some ex-post-doc’s (or professor’s) code as a starting point. Often times this code will be in Fortran (either because of the age of the person, or because they were given Fortran code). Unfortunately sometimes this code is F77, which means that we still have people in their 20s learning F77 (which I think is just wrong these days, as it gives Fortran as a whole a bad name). Source: Erlend Davidson

My friend, if you like C, you are gonna looooove B. B was C’s predecessor language. It’s a lot like C, but for C, Thompson and Ritchie added in data types. Basically, C is for lazy programmers. The only data type in B was determined by the size of a word on the host system. B was for “real-men programmers” who ate Hollerith cards for extra fiber, chewed iron into memory cores when they ran out of RAM, and dreamed in hexadecimal. Variables are evaluated contextually in B, and it doesn’t matter what the hell they contain; they are treated as though they hold integers in integer operations, and as though they hold memory addresses in pointer operations. Basically, B has all of the terseness of an assembly language, without all of the useful tooling that comes along with assembly.

As others indicate, pointers do not hold memory; they hold memory addresses. They are typed because before you go to that memory address, you probably want to know what’s there. Among other issues, how big is “there”? Should you read eight bits? Sixteen? Thirty-two? More? Inquiring minds want to know! Of course, it would also be nice to know whether the element at that address is an individual element or one element in an array, but C is for “slightly less real-men programmers” than B. Java does fully differentiate between scalars and arrays, and therefore is clearly for the weak minded. /jk Source: Joshua Gross

Hidden Features of C#

What are the most hidden features or tricks of C# that even C# fans, addicts, experts barely know?

Here are the revealed features so far:

Keywords

Attributes

Syntax

Language Features

Visual Studio Features

Framework

Methods and Properties

  • String.IsNullOrEmpty() method by KiwiBastard
  • List.ForEach() method by KiwiBastard
  • BeginInvoke() / EndInvoke() methods by Will Dean
  • Nullable<T>.HasValue and Nullable<T>.Value properties by Rismo
  • GetValueOrDefault method by John Sheehan

Tips & Tricks

  • Nice method for event handlers by Andreas H.R. Nilsson
  • Uppercase comparisons by John
  • Access anonymous types without reflection by dp
  • A quick way to lazily instantiate collection properties by Will
  • JavaScript-like anonymous inline-functions by roosteronacid

Other

  • netmodules by kokos
  • LINQBridge by Duncan Smart
  • Parallel Extensions by Joel Coehoorn
  • This isn’t C# per se, but I haven’t seen anyone who really uses System.IO.Path.Combine() to the extent that they should. In fact, the whole Path class is really useful, but no one uses it!
  • lambdas and type inference are underrated. Lambdas can have multiple statements, and they double as a compatible delegate object automatically (just make sure the signatures match), as in:
Console.CancelKeyPress +=
    (sender, e) => {
        Console.WriteLine("CTRL+C detected!\n");
        e.Cancel = true;
    };
  • From Rick Strahl: You can chain the ?? operator so that you can do a bunch of null comparisons.
string result = value1 ?? value2 ?? value3 ?? String.Empty;

When normalizing strings, it is highly recommended that you use ToUpperInvariant instead of ToLowerInvariant because Microsoft has optimized the code for performing uppercase comparisons.

I remember one time my coworker always changed strings to uppercase before comparing. I’ve always wondered why he does that because I feel it’s more “natural” to convert to lowercase first. After reading the book now I know why.

  • My favorite trick is using the null coalesce operator and parentheses to automagically instantiate collections for me.
private IList<Foo> _foo;

public IList<Foo> ListOfFoo 
    { get { return _foo ?? (_foo = new List<Foo>()); } }
  • Here are some interesting hidden C# features, in the form of undocumented C# keywords:
__makeref

__reftype

__refvalue

__arglist

These are undocumented C# keywords (even Visual Studio recognizes them!) that were added for more efficient boxing/unboxing prior to generics. They work in coordination with the System.TypedReference struct.

There’s also __arglist, which is used for variable length parameter lists.

One thing folks don’t know much about is System.WeakReference — a very useful class that keeps track of an object but still allows the garbage collector to collect it.

The most useful “hidden” feature would be the yield return keyword. It’s not really hidden, but a lot of folks don’t know about it. LINQ is built atop this; it allows for delay-executed queries by generating a state machine under the hood. Raymond Chen recently posted about the internal, gritty details.

  • Using @ for variable names that are keywords.
var @object = new object();
var @string = "";
var @if = IpsoFacto();
  • If you want to exit your program without calling any finally blocks or finalizers use FailFast:
Environment.FailFast()

Read more hidden C# Features at Hidden Features of C#? – Stack Overflow

Hidden Features of Python

Source: stackoverflow

What IDE to Use for Python

Programming, Coding and Algorithms Questions and Answers

Acronyms used:

 L  - Linux
 W  - Windows
 M  - Mac
 C  - Commercial
 F  - Free
 CF - Commercial with Free limited edition
 ?  - To be confirmed

What is The right JSON content type?

For JSON text:

application/json

Example: { "Name": "Foo", "Id": 1234, "Rank": 7 }

For JSONP (runnable JavaScript) with callback:

application/javascript
Example: functionCall({"Name": "Foo", "Id": 1234, "Rank": 7});

Here are some points that were mentioned in the relevant comments:

IANA has registered the official MIME Type for JSON as application/json.

When asked about why not text/json, Crockford seems to have said JSON is not really JavaScript nor text and also IANA was more likely to hand out application/* than text/*.


JSON (JavaScript Object Notation) and JSONP (“JSON with padding”) formats seem very similar, and therefore it might be very confusing which MIME type they should be using. Even though the formats are similar, there are some subtle differences between them.

So whenever in any doubt, I have a very simple approach (which works perfectly fine in most cases): go and check the corresponding RFC document.

JSON: RFC 4627 (The application/json Media Type for JavaScript Object Notation (JSON)) is the specification of the JSON format. It says in section 6 that the MIME media type for JSON text is

application/json.

JSONP: JSONP (“JSON with padding”) is handled in a different way than JSON in a browser. JSONP is treated as a regular JavaScript script and therefore it should use application/javascript, the current official MIME type for JavaScript. In many cases, however, the text/javascript MIME type will work fine too.

Note that text/javascript has been marked as obsolete by RFC 4329 (Scripting Media Types) document and it is recommended to use application/javascript type instead. However, due to legacy reasons, text/javascript is still widely used and it has cross-browser support (which is not always a case with application/javascript MIME type, especially with older browsers).

What are some mistakes to avoid while learning programming?

  1. Overuse of the GOTO statement. Most schools teach that this is a no-no.
  2. Not commenting your code with proper documentation – what exactly does the code do?
  3. Endless loops – a structured loop that has NO EXIT point.
  4. Overwriting memory – destroying data and/or code. Especially with dynamic allocation, stacks, queues.
  5. Not following discipline – Requirements, Design, Code, Test, Implementation.

Moreover, complex code should have a BLUEPRINT – a design. Otherwise it is like saying let’s build a house without a floor plan. Code/programs that have a requirements and design specification BEFORE the code is written tend to have a LOWER error rate, and less time is spent debugging and fixing errors. Source: Quora

Lisp.

The thing that always struck me is that the best programmers I would meet or read all had a couple of things in common.

  1. They didn’t use IDEs, preferring Emacs or Vim.
  2. They all learned or used Functional Programming (Lisp, Haskell, OCaml)
  3. They all wrote or endorsed some kind of testing, even if it’s just minimal TDD.
  4. They avoided fads and dependencies like the plague.

It is a basic truth that learning Lisp, or any functional programming, will fundamentally change the way you program and think about programming. Source: Quora

The two work well together. Both are effective at what they do:

  • Pairing is a continuous code review, with a human-powered ‘auto suggest’. If you like github copilot, pairing does that with a real brain behind it.
  • TDD forces you to think about how your code will be used early on in the process. That gives you the chance to code things so they are clear and easy to use

Both of these are ‘shift-left’ activities. In the days of old, code review and testing happened after the code was written. Design happened up front, but separate to coding, so you never got to see if the design was actually codeable properly. By shifting these activities to before the code gets written, we get a much faster feedback loop. That enables us to make corrections and improvements as we go.

Neither is better than each other. They target different parts of the coding challenge. By Alan Mellor

Yes, I’ve found that three can be very helpful, especially these days.

  • Monitor 1: IDE full screen
  • Monitor 2: Google, JIRA ticket, documentation. Manual Test tools
  • Monitor 3: Zoom/Teams/Slack/Outlook for general comms

That third monitor becomes almost essential if you are remote pairing and want to see your collaborator in real time.

My current work is teaching groups in our academy. That also benefits from three monitors: presenter view, participant view, and Zoom for chat and hands up in the group.

I can get away with two monitors. I can even do it with a £3 HDMI fake monitor USB plug. Neither is quite as effective. Source: Alan Mellor

You make the properties not different. And the key way to do that is by removing the properties completely.

Instead, you tell your objects to do some behaviour.

Say we have three classes full of different data that all needs adding to some report. Make an interface like this:

  • interface IReportSource { 
  •     void includeIn( Report r ); 
  • } 

so here, all your classes with different data will implement this interface. We can call the method ‘includeIn’ on each of them. We pass in a concrete class Report to that method. This will be the report that is being generated.

Then your first class which used to look like

  • class ALoadOfData { 
  •     get; set; name 
  •     get; set; quantity 
  • } 

(forgive the rusty/pseudo C# syntax please)

can be translated into:

  • class ARealObject : IReportSource { 
  •     private string name ; 
  •     private int quantity ; 
  •  
  •     void includeIn( Report r ) { 
  •         r.addBasicItem( name, quantity ); 
  •     } 
  • } 

You can see how the properties are no longer exposed. They remain encapsulated in the object, available for use inside our includeIn() method. That is now polymorphic, and you would write a custom includeIn() for each kind of class implementing IReportSource. It can then call a suitable method on the Report class, with a suitable number of properties (now hidden; so just fields). By Alan Mellor

What are the Top 20 lesser known but cool data structures?

1- Tries, also known as prefix-trees or crit-bit trees, have existed for over 40 years but are still relatively unknown. A very cool use of tries is described in “TRASH – A dynamic LC-trie and hash data structure“, which combines a trie with a hash function.
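
For illustration, a minimal dict-based trie sketch in Python (the end-of-word marker is an arbitrary choice):

```python
END = "$"  # arbitrary marker meaning "a stored word ends here"

def insert(trie: dict, word: str) -> None:
    node = trie
    for ch in word:
        node = node.setdefault(ch, {})  # walk/create one level per character
    node[END] = True

def contains(trie: dict, word: str) -> bool:
    node = trie
    for ch in word:
        if ch not in node:
            return False
        node = node[ch]
    return END in node

trie = {}
insert(trie, "car")
insert(trie, "cart")
print(contains(trie, "car"))  # True
print(contains(trie, "ca"))   # False: a prefix, not a stored word
```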

2- Bloom filter: Bit array of m bits, initially all set to 0.

To add an item you run it through k hash functions that will give you k indices in the array which you then set to 1.

To check if an item is in the set, compute the k indices and check if they are all set to 1.

Of course, this gives some probability of false positives (according to Wikipedia it’s about 0.61^(m/n), where n is the number of inserted items). False negatives are not possible.

Removing an item is impossible, but you can implement a counting Bloom filter, represented by an array of ints with increment/decrement.
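
A toy, illustrative Python sketch of the plain (non-counting) variant, deriving the k indices from one salted hash; the parameters are arbitrary:

```python
import hashlib

M, K = 1024, 3      # m bits and k hash functions (toy sizes)
bits = [0] * M

def indices(item: str):
    # Derive k indices by salting a single hash; real filters use
    # better-tuned schemes, this is just for illustration.
    for salt in range(K):
        digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
        yield int(digest, 16) % M

def add(item: str) -> None:
    for i in indices(item):
        bits[i] = 1

def might_contain(item: str) -> bool:
    # All k bits set: "probably yes". Any bit clear: "definitely no".
    return all(bits[i] for i in indices(item))

add("apple")
print(might_contain("apple"))   # True
print(might_contain("banana"))  # almost certainly False
```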

3- Rope: It’s a string that allows for cheap prepends, substrings, middle insertions and appends. I’ve really only had use for it once, but no other structure would have sufficed. Prepends on regular strings and arrays were just far too expensive for what we needed to do, and reversing everything was out of the question.

4- Skip lists are pretty neat.

Wikipedia
A skip list is a probabilistic data structure, based on multiple parallel, sorted linked lists, with efficiency comparable to a binary search tree (order log n average time for most operations).

They can be used as an alternative to balanced trees (using probabilistic balancing rather than strict enforcement of balancing). They are easy to implement and faster than, say, a red-black tree. I think they should be in every good programmer’s toolchest.

If you want to get an in-depth introduction to skip-lists here is a link to a video of MIT’s Introduction to Algorithms lecture on them.

Also, here is a Java applet demonstrating Skip Lists visually.

5- Spatial Indices, in particular R-trees and KD-trees, store spatial data efficiently. They are good for geographical map coordinate data and VLSI place and route algorithms, and sometimes for nearest-neighbor search.

Bit Arrays store individual bits compactly and allow fast bit operations.

6- Zippers – derivatives of data structures that modify the structure to have a natural notion of ‘cursor’ — current location. These are really useful as they guarantee indices cannot be out of bounds — used, e.g., in the xmonad window manager to track which window has focus.

Amazingly, you can derive them by applying techniques from calculus to the type of the original data structure!

7- Suffix tries. Useful for almost all kinds of string searching (http://en.wikipedia.org/wiki/Suffix_trie#Functionality). See also suffix arrays; they’re not quite as fast as suffix trees, but a whole lot smaller.

8- Splay trees (as mentioned above). The reason they are cool is threefold:

    • They are small: you only need the left and right pointers like you do in any binary tree (no node-color or size information needs to be stored)
    • They are (comparatively) very easy to implement
    • They offer optimal amortized complexity for a whole host of “measurement criteria” (log n lookup time being the one everybody knows). See http://en.wikipedia.org/wiki/Splay_tree#Performance_theorems

9- Heap-ordered search trees: you store a bunch of (key, prio) pairs in a tree, such that it’s a search tree with respect to the keys, and heap-ordered with respect to the priorities. One can show that such a tree has a unique shape (and it’s not always fully packed up-and-to-the-left). With random priorities, it gives you expected O(log n) search time, IIRC.

10- A niche one is adjacency lists for undirected planar graphs with O(1) neighbour queries. This is not so much a data structure as a particular way to organize an existing data structure. Here’s how you do it: every planar graph has a node with degree at most 6. Pick such a node, put its neighbors in its neighbor list, remove it from the graph, and recurse until the graph is empty. When given a pair (u, v), look for u in v’s neighbor list and for v in u’s neighbor list. Both have size at most 6, so this is O(1).

By the above algorithm, if u and v are neighbors, you won’t have both u in v’s list and v in u’s list. If you need this, just add each node’s missing neighbors to that node’s neighbor list, but store how much of the neighbor list you need to look through for fast lookup.

11- Lock-free alternatives to standard data structures, i.e. lock-free queues, stacks and lists, are much overlooked.
They are increasingly relevant as concurrency becomes a higher priority, and they are a much more admirable goal than using mutexes or locks to handle concurrent reads/writes.

Here’s some links
http://www.cl.cam.ac.uk/research/srg/netos/lock-free/
http://www.research.ibm.com/people/m/michael/podc-1996.pdf [Links to PDF]
http://www.boyet.com/Articles/LockfreeStack.html

Mike Acton’s (often provocative) blog has some excellent articles on lock-free design and approaches

12- I think Disjoint Set is pretty nifty for cases when you need to divide a bunch of items into distinct sets and query membership. Good implementations of the Union and Find operations result in amortized costs that are effectively constant (the inverse of Ackermann’s function, if I recall my data structures class correctly).
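
For illustration, a minimal union-find sketch in Python with union by size and path compression:

```python
parent, size = {}, {}

def find(x):
    parent.setdefault(x, x)
    size.setdefault(x, 1)
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression (halving)
        x = parent[x]
    return x

def union(a, b):
    ra, rb = find(a), find(b)
    if ra == rb:
        return
    if size[ra] < size[rb]:
        ra, rb = rb, ra
    parent[rb] = ra        # attach the smaller tree under the larger
    size[ra] += size[rb]

union("a", "b")
union("b", "c")
print(find("a") == find("c"))  # True: same set
print(find("a") == find("d"))  # False: "d" is its own set
```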

13- Fibonacci heaps

They’re used in some of the fastest known algorithms (asymptotically) for a lot of graph-related problems, such as the Shortest Path problem. Dijkstra’s algorithm runs in O(E log V) time with standard binary heaps; using Fibonacci heaps improves that to O(E + V log V), which is a huge speedup for dense graphs. Unfortunately, though, they have a high constant factor, often making them impractical in practice.

14- Anyone with experience in 3D rendering should be familiar with BSP trees. Generally, it’s a method of structuring a 3D scene to make it manageable for rendering, given the camera coordinates and bearing.

Binary space partitioning (BSP) is a method for recursively subdividing a space into convex sets by hyperplanes. This subdivision gives rise to a representation of the scene by means of a tree data structure known as a BSP tree.

In other words, it is a method of breaking up intricately shaped polygons into convex sets, or smaller polygons consisting entirely of non-reflex angles (angles smaller than 180°). For a more general description of space partitioning, see space partitioning.

Originally, this approach was proposed in 3D computer graphics to increase the rendering efficiency. Some other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D computer games, and other computer applications that involve handling of complex spatial scenes.

15- Huffman trees – used for compression.

16- Have a look at Finger Trees, especially if you’re a fan of the previously mentioned purely functional data structures. They’re a functional representation of persistent sequences supporting access to the ends in amortized constant time, and concatenation and splitting in time logarithmic in the size of the smaller piece.

As per the original article:

Our functional 2-3 finger trees are an instance of a general design technique introduced by Okasaki (1998), called implicit recursive slowdown. We have already noted that these trees are an extension of his implicit deque structure, replacing pairs with 2-3 nodes to provide the flexibility required for efficient concatenation and splitting.

A Finger Tree can be parameterized with a monoid, and using different monoids will result in different behaviors for the tree. This lets Finger Trees simulate other data structures.

17- Circular or ring buffer – used for streaming, among other things.
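
For illustration, a minimal fixed-capacity ring buffer in Python that overwrites the oldest element when full:

```python
class RingBuffer:
    def __init__(self, capacity: int):
        self.data = [None] * capacity
        self.capacity = capacity
        self.write = 0   # next slot to write into
        self.count = 0   # how many slots currently hold data

    def push(self, item):
        self.data[self.write] = item  # overwrites the oldest when full
        self.write = (self.write + 1) % self.capacity
        self.count = min(self.count + 1, self.capacity)

    def items(self):
        """Return the contents from oldest to newest."""
        start = (self.write - self.count) % self.capacity
        return [self.data[(start + i) % self.capacity]
                for i in range(self.count)]

rb = RingBuffer(3)
for x in [1, 2, 3, 4]:
    rb.push(x)
print(rb.items())  # [2, 3, 4]: the 1 was overwritten
```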

18- I’m surprised no one has mentioned Merkle trees (ie. Hash Trees).

Used in many cases (P2P programs, digital signatures) where you want to verify the hash of a whole file when you only have part of the file available to you.
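
For illustration, a toy Python sketch of computing a Merkle root (duplicating the last hash at odd-sized levels is one common convention, not the only one):

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    level = [h(c) for c in chunks]       # leaf hashes of the file chunks
    while len(level) > 1:
        if len(level) % 2:               # odd count: duplicate the last hash
            level.append(level[-1])
        level = [h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

chunks = [b"part1", b"part2", b"part3", b"part4"]
print(merkle_root(chunks).hex())
# A peer holding only part2 plus the sibling hashes along its path
# can recompute the root and verify that one chunk against it.
```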

19- <zvrba> Van Emde-Boas trees

I think it’d be useful to know why they’re cool. In general, the question “why” is the most important to ask 😉

My answer is that they give you O(log log n) dictionaries with {1..n} keys, independent of how many of the keys are in use. Just like repeated halving gives you O(log n), repeated square-rooting gives you O(log log n), which is what happens in the vEB tree.

20- An interesting variant of the hash table is called Cuckoo Hashing. It uses multiple hash functions instead of just 1 in order to deal with hash collisions. Collisions are resolved by removing the old object from the location specified by the primary hash, and moving it to a location specified by an alternate hash function. Cuckoo Hashing allows for more efficient use of memory space because you can increase your load factor up to 91% with only 3 hash functions and still have good access time.

Honorable mentions: splay trees, Cuckoo Hashing, min-max heaps, cache-oblivious data structures, Left-Leaning Red-Black Trees, Work-Stealing Queues, bootstrapped skew-binomial heaps, Kd-Trees, MX-CIF Quadtrees, HAMT, Inverted Index, Fenwick Tree, Ball Trees, Van Emde-Boas trees, nested sets, the half-edge data structure, Scapegoat trees, unrolled linked lists, 2-3 Finger Trees, Pairing heaps, Interval Trees, XOR Linked List, Binary Decision Diagram, the Region Quadtree, treaps, counted unsorted balanced B-trees, Arne Andersson trees, DAWGs, BK-Trees (Burkhard-Keller Trees), Zobrist Hashing, Persistent Data Structures, B* tree, Deletable Bloom Filters (DlBF)

Ring buffers, skip lists, priority deques, Ternary Search Trees, FM-index, PQ-Trees, sparse matrix data structures, delta lists/delta queues, Bucket Brigade, Burrows–Wheeler transform, corner-stitched data structures, Disjoint Set Forests, Binomial heaps, Cycle Sort.

Variable names in languages like Python are not bound to storage locations until run time. That means you have to look up each name to find out what storage it is bound to and what its type is before you can apply an operation like “+” to it. In C++, names are bound to storage at compile time, so no lookup is needed, and the type is fixed at compile time so the compiler can generate machine code with no overhead for interpretation. Late-bound languages will never be as fast as languages bound at compile time.

You could make a language that looks kinda like Python that is compile-time bound and statically typed. You could incrementally compile such a language. But you can also build an environment that incrementally compiles C++ so it would feel a lot like using Python. Try godbolt or tutorialspoint if you want to see this actually working for small programs. 
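
A tiny illustration of the run-time rebinding described above: the same name holds objects of different types, so “+” cannot be compiled down to one fixed machine instruction.

```python
x = 2
print(x + x)    # integer addition... this time

x = "ab"
print(x + x)    # now the very same "+" means string concatenation

x = [1, 2]
print(x + x)    # and now list concatenation; the type is known only at run time
```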

Source: quora

Have I got good news for you! No one has ever asked me my IQ, nor have I ever asked anyone for their IQ. This was true when I was a software engineer, and is true now that I’m a computer scientist.

Try to learn to program. If you can learn in an appropriate environment (a class with a good instructor), go from there. If you fail the first time, adjust your learning approach and try again. If you still can’t, find another future; you probably wouldn’t like computer programming, anyway. If you learn later, that’s fine. 

Source: Here

Beginners to C++ will consistently struggle with getting a C++ program off the ground. Even “Hello World” can be a challenge. Making a GUI in C++ from scratch? Almost impossible in the beginning.

These 4 areas cannot be learned by any beginner to C++ in 1 day or even 1 month in most cases. These areas challenge nearly all beginners and I have seen cases where it can take a few months to teach.

These are the most fundamental things you need to be able to do to build and produce a program in C++.

Basic Challenge #1: Creating a Program File

  1. Compiling and linking, even in an IDE.
  2. Project settings in an IDE for C++ projects.
  3. Makefiles, scripts, and environment variables affecting compilation.

Basic Challenge #2: Using Other People’s C++ Code

  1. Going outside the STL and using libraries.
  2. Proper library paths in source, file path during compile.
  3. Static versus dynamic libraries during linking.
  4. Symbol reference resolution.

Basic Challenge #3: Troubleshooting Code

  1. Deciphering compiler error messages.
  2. Deciphering linker error messages.
  3. Resolving segmentation faults.

Basic Challenge #4: Actual C++ Code

  1. Writing excellent if/loop/case/assign/call statements.
  2. Managing header/implementation files consistently.
  3. Rigorously avoiding name collisions while staying productive.
  4. Various forms of function callback, especially in GUIs.

How do you explain them?

You cannot explain any of them in a way that most persons will pick up right away. You can describe these things by way of analogy, you can even have learners mirror you at the same time you demonstrate them. I’ve done similar things with trainees in a work setting. In the end, it usually requires time on the order of months and years to pick up these things.

More at C++ the Basic Way – UI and Command-Line

As a professional compiler writer and a student of computer languages and computer architecture, I believe this question needs a deeper analysis.

I would propose the following taxonomy:

1. Assembly code,

2. Implementation languages,

3. Low Level languages and

4. High Level Languages.

Assembly code is where there is a one-for-one translation between source and machine code.

Macro processors were invented to improve productivity, but to debug, a one-for-one listing is needed. The next question is: “What is the hardest assembly code?” I would vote for the x86–32. It is a very byzantine architecture with a number of mistakes and missteps. Fortunately the x86–64 cleans up many of these errors.

Implementation languages are languages that are architecture-specific but allow a more statement-like form of expression.

There is no “semantic gap” between assembly code and the machine. Bliss, PL360, and the first versions of C were in this category. They required the same understanding of the machine as assembly, without the pain of assembly. These are hard languages; their gap from assembly is only one of syntax.

Next are the Low Level Languages.

Modern “C” firmly fits here. These are languages whose design was molded around the limitations of computer architecture. FORTRAN, C, Pascal, and Basic are archetypes of these languages. They are easier to learn and use than assembly or implementation languages. They all have a “run-time library” that maintains an execution environment.

As a note, LISP has some syntax, CAR and CDR, which are left over from the IBM 704 it was first implemented on.

Last are the “High Level Languages”.

Languages that require an extensive runtime environment for support. All except Algol require a “garbage collector” for efficient memory management. The languages are: Algol, SNOBOL4, LISP (and its variants), Java, Smalltalk, Python, Ruby, and Prolog.

Which of these is hardest? I would vote for Prolog, with LISP second. Why? The logical process of “Resolution” has taken me some time to learn; mastery is a long way away. Is it harder than assembly code? Yes and no. I would never attempt in assembly a problem I use Prolog for; the order of effort is too big. I find I spend hours writing 20 lines of Prolog which replace hundreds of lines of SNOBOL4. LISP can be hard unless you have intelligent editors and other tools. In one sense LISP is an “assembly language for an AI machine” and Prolog is an “assembly language for a logic machine.” Both Prolog and LISP are very powerful languages. I find it takes deep mental effort to write code in both. But the code does wonderful things!

What and where are the stack and the heap?

  • Where and what are they (physically in a real computer’s memory)?
  • To what extent are they controlled by the OS or language run-time?
  • What is their scope?
  • What determines the size of each of them?
  • What makes one faster?

The stack is the memory set aside as scratch space for a thread of execution. When a function is called, a block is reserved on the top of the stack for local variables and some bookkeeping data. When that function returns, the block becomes unused and can be used the next time a function is called. The stack is always reserved in a LIFO (last in first out) order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.

The heap is memory set aside for dynamic allocation. Unlike the stack, there’s no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time; there are many custom heap allocators available to tune heap performance for different usage patterns.

Each thread gets a stack, while there’s typically only one heap for the application (although it isn’t uncommon to have multiple heaps for different types of allocation).

To answer your questions directly:

To what extent are they controlled by the OS or language runtime?

The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.

What is their scope?

The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.

What determines the size of each of them?

The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).

What makes one faster?

The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or deallocation. Also, each byte in the stack tends to be reused very frequently which means it tends to be mapped to the processor’s cache, making it very fast. Another performance hit for the heap is that the heap, being mostly a global resource, typically has to be multi-threading safe, i.e. each allocation and deallocation needs to be – typically – synchronized with “all” other heap accesses in the program.

(Diagram omitted; image source: vikashazrati.wordpress.com)

Stack:

  • Stored in computer RAM just like the heap.
  • Variables created on the stack will go out of scope and are automatically deallocated.
  • Much faster to allocate in comparison to variables on the heap.
  • Implemented with an actual stack data structure.
  • Stores local data, return addresses, used for parameter passing.
  • Can have a stack overflow when too much of the stack is used (mostly from infinite or too deep recursion, very large allocations).
  • Data created on the stack can be used without pointers.
  • You would use the stack if you know exactly how much data you need to allocate before compile time and it is not too big.
  • Usually has a maximum size already determined when your program starts.

Heap:

  • Stored in computer RAM just like the stack.
  • In C++, variables on the heap must be destroyed manually and never fall out of scope. The data is freed with delete, delete[], or free.
  • Slower to allocate in comparison to variables on the stack.
  • Used on demand to allocate a block of data for use by the program.
  • Can have fragmentation when there are a lot of allocations and deallocations.
  • In C++ or C, data created on the heap will be pointed to by pointers and allocated with new or malloc respectively.
  • Can have allocation failures if too big of a buffer is requested to be allocated.
  • You would use the heap if you don’t know exactly how much data you will need at run time or if you need to allocate a lot of data.
  • Responsible for memory leaks.

Example:

void foo()
{
  char *pBuffer; // <-- nothing allocated yet (excluding the pointer itself, which is allocated here on the stack)
  bool b = true; // Allocated on the stack.
  if (b)
  {
    // Create 500 bytes on the stack
    char buffer[500];

    // Create 500 bytes on the heap
    pBuffer = new char[500];

  } // <-- buffer is deallocated here, pBuffer is not
} // <-- oops, there's a memory leak; I should have called delete[] pBuffer;

The most important point is that heap and stack are generic terms for ways in which memory can be allocated. They can be implemented in many different ways, and the terms apply to the basic concepts.

  • In a stack of items, items sit one on top of the other in the order they were placed there, and you can only remove the top one (without toppling the whole thing over).

    [Image: a stack, like a stack of papers]

    The simplicity of a stack is that you do not need to maintain a table containing a record of each section of allocated memory; the only state information you need is a single pointer to the end of the stack. To allocate and deallocate, you just increment and decrement that single pointer (a minimal sketch of this pointer-bump scheme follows below). Note: a stack can sometimes be implemented to start at the top of a section of memory and extend downwards rather than growing upwards.

  • In a heap, there is no particular order to the way items are placed. You can reach in and remove items in any order because there is no clear ‘top’ item.

    [Image: a heap, like a heap of licorice allsorts]

    Heap allocation requires maintaining a full record of what memory is allocated and what isn’t, as well as some overhead maintenance to reduce fragmentation, find contiguous memory segments big enough to fit the requested size, and so on. Memory can be deallocated at any time leaving free space. Sometimes a memory allocator will perform maintenance tasks such as defragmenting memory by moving allocated memory around, or garbage collecting – identifying at runtime when memory is no longer in scope and deallocating it.

These images should do a fairly good job of describing the two ways of allocating and freeing memory in a stack and a heap. Yum!
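Here is a minimal sketch of that pointer-bump scheme, using a fixed buffer to stand in for the stack region (the names and sizes are illustrative, and a real stack also keeps every allocation aligned):

#include <cstddef>
#include <cstdio>

alignas(16) char region[4096]; // pretend this is the stack's memory
char *top = region;            // the only bookkeeping: a single pointer

void *stack_alloc(std::size_t n) {
    void *p = top;
    top += n;                  // allocate: just advance the pointer
    return p;
}

void stack_free(std::size_t n) {
    top -= n;                  // deallocate: move it back (LIFO order only)
}

int main() {
    int *a = static_cast<int *>(stack_alloc(sizeof(int)));
    *a = 7;
    std::printf("%d\n", *a);
    stack_free(sizeof(int));   // must free in the reverse order of allocation
}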

  • To what extent are they controlled by the OS or language runtime?

    As mentioned, heap and stack are general terms, and can be implemented in many ways. Computer programs typically have a stack called a call stack which stores information relevant to the current function such as a pointer to whichever function it was called from, and any local variables. Because functions call other functions and then return, the stack grows and shrinks to hold information from the functions further down the call stack. A program doesn’t really have runtime control over it; it’s determined by the programming language, OS and even the system architecture.

    A heap is a general term used for any memory that is allocated dynamically and randomly; i.e. out of order. The memory is typically allocated by the OS, with the application calling API functions to do this allocation. There is a fair bit of overhead required in managing dynamically allocated memory, which is usually handled by the runtime code of the programming language or environment used.

  • What is their scope?

    The call stack is such a low-level concept that it doesn’t relate to ‘scope’ in the programming sense. If you disassemble some code you’ll see relative pointer-style references to portions of the stack, but as far as a higher-level language is concerned, the language imposes its own rules of scope. One important aspect of a stack, however, is that once a function returns, anything local to that function is immediately freed from the stack. That works the way you’d expect it to work given how your programming languages work. The scope of a heap is also difficult to define: it is whatever the OS exposes, but your programming language adds its own rules about what a “scope” is in your application. The processor architecture and the OS use virtual addressing, which the processor translates to physical addresses, and there are page faults, etc.; they keep track of which pages belong to which applications. You never really need to worry about this, though, because you just use whatever method your programming language uses to allocate and free memory, and check for errors (if the allocation/freeing fails for any reason).

  • What determines the size of each of them?

    Again, it depends on the language, compiler, operating system and architecture. A stack is usually pre-allocated, because by definition it must be contiguous memory. The language compiler or the OS determine its size. You don’t store huge chunks of data on the stack, so it’ll be big enough that it should never be fully used, except in cases of unwanted endless recursion (hence, “stack overflow”) or other unusual programming decisions.

    A heap is a general term for anything that can be dynamically allocated. Depending on which way you look at it, it is constantly changing size. In modern processors and operating systems the exact way it works is very abstracted anyway, so you don’t normally need to worry much about how it works deep down, except that (in languages where it lets you) you mustn’t use memory that you haven’t allocated yet or memory that you have freed.

  • What makes one faster?

    The stack is faster because all free memory is always contiguous. No list needs to be maintained of all the segments of free memory, just a single pointer to the current top of the stack. Compilers usually store this pointer in a special, fast register for this purpose. What’s more, subsequent operations on a stack are usually concentrated within very nearby areas of memory, which at a very low level is good for optimization by the processor on-die caches.

  • Both the stack and the heap are memory areas allocated from the underlying operating system (often virtual memory that is mapped to physical memory on demand).
  • In a multi-threaded environment each thread will have its own completely independent stack but they will share the heap. Concurrent access has to be controlled on the heap and is not possible on the stack.

The heap

  • The heap contains a linked list of used and free blocks. New allocations on the heap (by new or malloc) are satisfied by creating a suitable block from one of the free blocks. This requires updating the list of blocks on the heap. This meta information about the blocks on the heap is also stored on the heap often in a small area just in front of every block.
  • As the heap grows new blocks are often allocated from lower addresses towards higher addresses. Thus you can think of the heap as a heap of memory blocks that grows in size as memory is allocated. If the heap is too small for an allocation the size can often be increased by acquiring more memory from the underlying operating system.
  • Allocating and deallocating many small blocks may leave the heap in a state where there are a lot of small free blocks interspersed between the used blocks. A request to allocate a large block may fail because none of the free blocks are large enough to satisfy the allocation request even though the combined size of the free blocks may be large enough. This is called heap fragmentation.
  • When a used block that is adjacent to a free block is deallocated the new free block may be merged with the adjacent free block to create a larger free block effectively reducing the fragmentation of the heap.

[Image: the heap]

The stack

  • The stack often works in close tandem with a special register on the CPU named the stack pointer. Initially the stack pointer points to the top of the stack (the highest address on the stack).
  • The CPU has special instructions for pushing values onto the stack and popping them off the stack. Each push stores the value at the current location of the stack pointer and decreases the stack pointer. A pop retrieves the value pointed to by the stack pointer and then increases the stack pointer (don’t be confused by the fact that adding a value to the stack decreases the stack pointer and removing a value increases it. Remember that the stack grows to the bottom). The values stored and retrieved are the values of the CPU registers.
  • If a function has parameters, these are pushed onto the stack before the call to the function. The code in the function is then able to navigate up the stack from the current stack pointer to locate these values.
  • When a function is called the CPU uses special instructions that push the current instruction pointer onto the stack, i.e. the address of the code executing on the stack. The CPU then jumps to the function by setting the instruction pointer to the address of the function called. Later, when the function returns, the old instruction pointer is popped off the stack and execution resumes at the code just after the call to the function.
  • When a function is entered, the stack pointer is decreased to allocate more space on the stack for local (automatic) variables. If the function has one local 32-bit variable, four bytes are set aside on the stack. When the function returns, the stack pointer is moved back to free the allocated area (a small demonstration follows below).
  • Nested function calls work like a charm. Each new call will allocate function parameters, the return address and space for local variables, and these activation records can be stacked for nested calls and will unwind in the correct way when the functions return.
  • As the stack is a limited block of memory, you can cause a stack overflow by calling too many nested functions and/or allocating too much space for local variables. Often the memory area used for the stack is set up in such a way that writing below the bottom (the lowest address) of the stack will trigger a trap or exception in the CPU. This exceptional condition can then be caught by the runtime and converted into some kind of stack overflow exception.

[Image: the stack]
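As a small demonstration of activation records being stacked, the sketch below prints the address of a local variable in each nested call; on most mainstream platforms the addresses decrease as the calls nest, showing the stack growing downward (the exact addresses and the growth direction are platform-dependent):

#include <cstdio>

void g() { int y = 0; std::printf("g's frame:    %p\n", (void *)&y); }
void f() { int x = 0; std::printf("f's frame:    %p\n", (void *)&x); g(); }

int main() {
    int m = 0;
    std::printf("main's frame: %p\n", (void *)&m);
    f(); // each nested call gets a fresh activation record below its caller's
}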

Can a function be allocated on the heap instead of a stack?

No, activation records for functions (i.e. local or automatic variables) are allocated on the stack that is used not only to store these variables, but also to keep track of nested function calls.

How the heap is managed is really up to the runtime environment. C uses malloc and C++ uses new, but many other languages have garbage collection.

However, the stack is a more low-level feature closely tied to the processor architecture. Growing the heap when there is not enough space isn’t too hard, since it can be implemented in the library call that handles the heap. However, growing the stack is often impossible, as the stack overflow is only discovered when it is too late; shutting down the thread of execution is then the only viable option.

In the following C# code

public void Method1()
{
    int i = 4;                  // value type: stored on the stack
    int y = 2;                  // value type: stored on the stack
    class1 cls1 = new class1(); // reference on the stack; the object itself on the heap
}

Here’s how the memory is managed

[Image: variables on the stack]

Local variables that only need to last as long as the function invocation go on the stack. The heap is used for variables whose lifetime we don’t really know up front, but that we expect to last a while. In most languages it’s critical that we know at compile time how large a variable is if we want to store it on the stack.

Objects (which vary in size as we update them) go on the heap because we don’t know at creation time how long they are going to last. In many languages the heap is garbage collected to find objects (such as the cls1 object) that no longer have any references.

In Java, most objects go directly into the heap. In languages like C / C++, structs and classes can often remain on the stack when you’re not dealing with pointers.

More information can be found here:

The difference between stack and heap memory allocation « timmurphy.org

and here:

Creating Objects on the Stack and Heap

This article is the source of picture above: Six important .NET concepts: Stack, heap, value types, reference types, boxing, and unboxing – CodeProject

but be aware it may contain some inaccuracies.

The Stack

When you call a function, the arguments to that function plus some other overhead are put on the stack. Some info (such as where to go on return) is also stored there. When you declare a variable inside your function, that variable is also allocated on the stack.

Deallocating the stack is pretty simple because you always deallocate in the reverse order in which you allocate. Stack stuff is added as you enter functions, the corresponding data is removed as you exit them. This means that you tend to stay within a small region of the stack unless you call lots of functions that call lots of other functions (or create a recursive solution).

The Heap

The heap is a generic name for where you put the data that you create on the fly. If you don’t know how many spaceships your program is going to create, you are likely to use the new (or malloc or equivalent) operator to create each spaceship. This allocation is going to stick around for a while, so it is likely we will free things in a different order than we created them.

Thus, the heap is far more complex, because there end up being regions of memory that are unused interleaved with chunks that are – memory gets fragmented. Finding free memory of the size you need is a difficult problem. This is why the heap should be avoided (though it is still often used).

Implementation

Implementation of both the stack and heap is usually down to the runtime / OS. Often games and other applications that are performance critical create their own memory solutions that grab a large chunk of memory from the heap and then dish it out internally to avoid relying on the OS for memory.

This is only practical if your memory usage is quite different from the norm – i.e for games where you load a level in one huge operation and can chuck the whole lot away in another huge operation.

Physical location in memory

This is less relevant than you think because of a technology called virtual memory, which makes your program think that you have access to a certain address where the physical data is somewhere else (even on the hard disc!). The addresses you get for the stack are in increasing order as your call tree gets deeper. The addresses for the heap are unpredictable (i.e., implementation-specific) and frankly not important.

In Short

A stack is used for static memory allocation and a heap for dynamic memory allocation, both stored in the computer’s RAM.


In Detail

The Stack

The stack is a “LIFO” (last in, first out) data structure that is managed and optimized by the CPU quite closely. Every time a function declares a new variable, it is “pushed” onto the stack. Then every time a function exits, all of the variables pushed onto the stack by that function are freed (that is to say, they are deleted). Once a stack variable is freed, that region of memory becomes available for other stack variables.

The advantage of using the stack to store variables, is that memory is managed for you. You don’t have to allocate memory by hand, or free it once you don’t need it any more. What’s more, because the CPU organizes stack memory so efficiently, reading from and writing to stack variables is very fast.

More can be found here.


The Heap

The heap is a region of your computer’s memory that is not managed automatically for you, and is not as tightly managed by the CPU. It is a more free-floating region of memory (and is larger). To allocate memory on the heap, you must use malloc() or calloc(), which are built-in C functions. Once you have allocated memory on the heap, you are responsible for using free() to deallocate that memory once you don’t need it any more.
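A minimal sketch of that allocate/use/free life cycle (written as C++ here for consistency with the other examples; the variable names are illustrative):

#include <cstdio>
#include <cstdlib>

int main() {
    // Request enough heap memory for 100 ints.
    int *scores = (int *)std::malloc(100 * sizeof *scores);
    if (!scores) return 1;      // allocation can fail: always check

    scores[0] = 42;             // use the block like any array
    std::printf("%d\n", scores[0]);

    std::free(scores);          // without this call, the block leaks
    return 0;
}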

If you fail to do this, your program will have what is known as a memory leak. That is, memory on the heap will still be set aside (and won’t be available to other processes). As we will see in the debugging section, there is a tool called Valgrind that can help you detect memory leaks.

Unlike the stack, the heap does not have size restrictions on variable size (apart from the obvious physical limitations of your computer). Heap memory is slightly slower to be read from and written to, because one has to use pointers to access memory on the heap. We will talk about pointers shortly.

Unlike the stack, variables created on the heap are accessible by any function, anywhere in your program. Heap variables are essentially global in scope.

More can be found here.


Variables allocated on the stack are stored directly in memory, and access to this memory is very fast; its allocation is dealt with when the program is compiled. When a function or a method calls another function, which in turn calls another function, and so on, the execution of all those functions remains suspended until the very last function returns its value. The stack is always reserved in a LIFO order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack: freeing a block from the stack is nothing more than adjusting one pointer.

Variables allocated on the heap have their memory allocated at run time and accessing this memory is a bit slower, but the heap size is only limited by the size of virtual memory. Elements of the heap have no dependencies with each other and can always be accessed randomly at any time. You can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time.


You can use the stack if you know exactly how much data you need to allocate at compile time, and it is not too big. You can use the heap if you don’t know exactly how much data you will need at runtime or if you need to allocate a lot of data.

In a multi-threaded situation each thread will have its own completely independent stack, but they will share the heap. The stack is thread specific and the heap is application specific. The stack is important to consider in exception handling and thread executions.

Each thread gets a stack, while there’s typically only one heap for the application (although it isn’t uncommon to have multiple heaps for different types of allocation).


At run time, if the application needs more heap, it can allocate memory from free memory, and if the stack needs more memory, it can draw on free memory already reserved for the application.

Even more detail is given here and here.


Now, on to the answers to your questions.

To what extent are they controlled by the OS or language runtime?

The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.

More can be found here.

What is their scope?

Already covered above.

“You can use the stack if you know exactly how much data you need to allocate at compile time, and it is not too big. You can use the heap if you don’t know exactly how much data you will need at runtime or if you need to allocate a lot of data.”

More can be found here.

What determines the size of each of them?

The size of the stack is set by the OS when a thread is created. The size of the heap is set on application startup, but it can grow as space is needed (the allocator requests more memory from the operating system).

What makes one faster?

Stack allocation is much faster since all it really does is move the stack pointer. Using memory pools, you can get comparable performance out of heap allocation, but that comes with a slight added complexity and its own headaches.
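A hedged sketch of the memory-pool idea: grab one big block from the heap up front, then hand out pieces by bumping an offset, so each “allocation” costs about as much as a stack allocation. This is a simplified arena for illustration (all names are invented), not a production allocator:

#include <cstddef>
#include <cstdlib>
#include <new>

class Arena {
    char *base_;
    std::size_t size_;
    std::size_t used_ = 0;
public:
    explicit Arena(std::size_t size)
        : base_(static_cast<char *>(std::malloc(size))), size_(size) {
        if (!base_) throw std::bad_alloc{};
    }
    ~Arena() { std::free(base_); }  // one free releases everything

    void *allocate(std::size_t n, std::size_t align = alignof(std::max_align_t)) {
        std::size_t p = (used_ + align - 1) & ~(align - 1); // round up for alignment
        if (p + n > size_) return nullptr;                  // out of arena space
        used_ = p + n;
        return base_ + p;
    }
    void reset() { used_ = 0; }     // discard all allocations in O(1)
};

int main() {
    Arena arena(1 << 20);           // 1 MiB grabbed from the heap up front
    int *xs = static_cast<int *>(arena.allocate(100 * sizeof(int)));
    if (xs) xs[0] = 7;              // use it like any heap block
    arena.reset();                  // “free” everything at once
}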

Also, stack vs. heap is not only a performance consideration; it also tells you a lot about the expected lifetime of objects.

Details can be found from here.

How do you stop scripters from slamming your website hundreds of times a second?

How about implementing something like SO does with the CAPTCHAs?

If you’re using the site normally, you’ll probably never see one. If you happen to reload the same page too often, post successive comments too quickly, or something else that triggers an alarm, make them prove they’re human. In your case, this would probably be constant reloads of the same page, following every link on a page quickly, or filling in an order form too fast to be human.

If they fail the check x times in a row (say, 2 or 3), give that IP a timeout or other such measure. Then at the end of the timeout, dump them back to the check again.


Since you have unregistered users accessing the site, you do have only IPs to go on. You can issue sessions to each browser and track that way if you wish. And, of course, throw up a human-check if too many sessions are being (re-)created in succession (in case a bot keeps deleting the cookie).

As far as catching too many innocents, you can put up a disclaimer on the human-check page: “This page may also appear if too many anonymous users are viewing our site from the same location. We encourage you to register or login to avoid this.” (Adjust the wording appropriately.)

Besides, what are the odds that X people are loading the same page(s) at the same time from one IP? If they’re high, maybe you need a different trigger mechanism for your bot alarm.


Edit: Another option, if they fail too many times and you’re confident about the product’s demand, is to block them and make them personally CALL you to remove the block.

Having people call does seem like an asinine measure, but it makes sure there’s a human somewhere behind the computer. The key is to have the block only be in place for a condition which should almost never happen unless it’s a bot (e.g. fail the check multiple times in a row). Then it FORCES human interaction – to pick up the phone.

In response to the comment of having them call me, there’s obviously that tradeoff here. Are you worried enough about ensuring your users are human to accept a couple phone calls when they go on sale? If I were so concerned about a product getting to human users, I’d have to make this decision, perhaps sacrificing a (small) bit of my time in the process.

Since it seems like you’re determined to not let bots get the upper hand/slam your site, I believe the phone may be a good option. Since I don’t make a profit off your product, I have no interest in receiving these calls. Were you to share some of that profit, however, I may become interested. As this is your product, you have to decide how much you care and implement accordingly.


The other ways of releasing the block just aren’t as effective: a timeout (but they’d get to slam your site again after, rinse-repeat), a long timeout (if it was really a human trying to buy your product, they’d be SOL and punished for failing the check), email (easily done by bots), fax (same), or snail mail (takes too long).

You could, of course, instead have the timeout period increase per IP for each time they get a timeout. Just make sure you’re not punishing true humans inadvertently.

The unsatisfying answer: Nearly every C++ compiler can output assembly language,* so assembly language can be exactly the same speed as C++ if you use C++ to develop the assembly code.

The more interesting answer: It’s highly unlikely that an application written entirely in assembly language remains faster than the same application written in C++ over the long run, even in the unlikely case it starts out faster.

Repeat after me: Assembly Language Isn’t Magic™.

For the nitty gritty details, I’ll just point you to some previous answers I’ve written, as well as some related questions, and at the end, an excellent answer from Christopher Clark:

Performance optimization strategies as a last resort

Let’s assume:

  • the code already is working correctly
  • the algorithms chosen are already optimal for the circumstances of the problem
  • the code has been measured, and the offending routines have been isolated
  • all attempts to optimize will also be measured to ensure they do not make matters worse

OK, you’re defining the problem to where it would seem there is not much room for improvement. That is fairly rare, in my experience. I tried to explain this in a Dr. Dobbs article in November 1993, by starting from a conventionally well-designed non-trivial program with no obvious waste and taking it through a series of optimizations until its wall-clock time was reduced from 48 seconds to 1.1 seconds, and the source code size was reduced by a factor of 4. My diagnostic tool was this. The sequence of changes was this:

  • The first problem found was use of list clusters (now called “iterators” and “container classes”) accounting for over half the time. Those were replaced with fairly simple code, bringing the time down to 20 seconds.

  • Now the largest time-taker is more list-building. As a percentage, it was not so big before, but now it is because the bigger problem was removed. I find a way to speed it up, and the time drops to 17 seconds.

  • Now it is harder to find obvious culprits, but there are a few smaller ones that I can do something about, and the time drops to 13 sec.

Now I seem to have hit a wall. The samples are telling me exactly what it is doing, but I can’t seem to find anything that I can improve. Then I reflect on the basic design of the program, on its transaction-driven structure, and ask if all the list-searching that it is doing is actually mandated by the requirements of the problem.

Then I hit upon a re-design, where the program code is actually generated (via preprocessor macros) from a smaller set of source, and in which the program is not constantly figuring out things that the programmer knows are fairly predictable. In other words, don’t “interpret” the sequence of things to do, “compile” it.

  • That redesign is done, shrinking the source code by a factor of 4, and the time is reduced to 10 seconds.

Now, because it’s getting so quick, it’s hard to sample, so I give it 10 times as much work to do, but the following times are based on the original workload.

  • More diagnosis reveals that it is spending time in queue-management. In-lining these reduces the time to 7 seconds.

  • Now a big time-taker is the diagnostic printing I had been doing. Flush that – 4 seconds.

  • Now the biggest time-takers are calls to malloc and free. Recycle objects – 2.6 seconds.

  • Continuing to sample, I still find operations that are not strictly necessary – 1.1 seconds.

Total speedup factor: 43.6

Now no two programs are alike, but in non-toy software I’ve always seen a progression like this. First you get the easy stuff, and then the more difficult, until you get to a point of diminishing returns. Then the insight you gain may well lead to a redesign, starting a new round of speedups, until you again hit diminishing returns. Now this is the point at which it might make sense to wonder whether ++i or i++ or for(;;) or while(1) are faster: the kinds of questions I see so often on Stack Overflow.

P.S. It may be wondered why I didn’t use a profiler. The answer is that almost every one of these “problems” was a function call site, which stack samples pinpoint. Profilers, even today, are just barely coming around to the idea that statements and call instructions are more important to locate, and easier to fix, than whole functions.

I actually built a profiler to do this, but for a real down-and-dirty intimacy with what the code is doing, there’s no substitute for getting your fingers right in it. It is not an issue that the number of samples is small, because none of the problems being found are so tiny that they are easily missed.

ADDED: jerryjvl requested some examples. Here is the first problem. It consists of a small number of separate lines of code, together taking over half the time:

 /* IF ALL TASKS DONE, SEND ITC_ACKOP, AND DELETE OP */
if (ptop->current_task >= ILST_LENGTH(ptop->tasklist)) {
. . .
/* FOR EACH OPERATION REQUEST */
for (ptop = ILST_FIRST(oplist); ptop != NULL; ptop = ILST_NEXT(oplist, ptop)) {
. . .
/* GET CURRENT TASK */
ptask = ILST_NTH(ptop->tasklist, ptop->current_task);

These were using the list cluster ILST (similar to a list class). They are implemented in the usual way, with “information hiding” meaning that the users of the class were not supposed to have to care how they were implemented. When these lines were written (out of roughly 800 lines of code) thought was not given to the idea that these could be a “bottleneck” (I hate that word). They are simply the recommended way to do things. It is easy to say in hindsight that these should have been avoided, but in my experience all performance problems are like that. In general, it is good to try to avoid creating performance problems. It is even better to find and fix the ones that are created, even though they “should have been avoided” (in hindsight). I hope that gives a bit of the flavor.

Here is the second problem, in two separate lines:

 /* ADD TASK TO TASK LIST */
ILST_APPEND(ptop->tasklist, ptask);
. . .
/* ADD TRANSACTION TO TRANSACTION QUEUE */
ILST_APPEND(trnque, ptrn);

These are building lists by appending items to their ends. (The fix was to collect the items in arrays, and build the lists all at once.) The interesting thing is that these statements only cost (i.e. were on the call stack) 3/48 of the original time, so they were not in fact a big problem at the beginning. However, after removing the first problem, they cost 3/20 of the time and so were now a “bigger fish”. In general, that’s how it goes.

I might add that this project was distilled from a real project I helped on. In that project, the performance problems were far more dramatic (as were the speedups), such as calling a database-access routine within an inner loop to see if a task was finished.

REFERENCE ADDED: The source code, both original and redesigned, can be found in www.ddj.com, for 1993, in file 9311.zip, files slug.asc and slug.zip.

EDIT 2011/11/26: There is now a SourceForge project containing source code in Visual C++ and a blow-by-blow description of how it was tuned. It only goes through the first half of the scenario described above, and it doesn’t follow exactly the same sequence, but still gets a 2-3 order of magnitude speedup.

Suggestions:

  • Pre-compute rather than re-calculate: for any loops or repeated calls that contain calculations with a relatively limited range of inputs, consider making a lookup (array or dictionary) that contains the result of that calculation for all values in the valid range of inputs. Then use a simple lookup inside the algorithm instead (a minimal sketch follows this list).
    Down-sides: if few of the pre-computed values are actually used this may make matters worse; also, the lookup may take significant memory.
  • Don’t use library methods: most libraries need to be written to operate correctly under a broad range of scenarios, and perform null checks on parameters, etc. By re-implementing a method you may be able to strip out a lot of logic that does not apply in the exact circumstance you are using it.
    Down-sides: writing additional code means more surface area for bugs.
  • Do use library methods: to contradict myself, language libraries get written by people that are a lot smarter than you or me; odds are they did it better and faster. Do not implement it yourself unless you can actually make it faster (i.e.: always measure!)
  • Cheat: in some cases although an exact calculation may exist for your problem, you may not need ‘exact’, sometimes an approximation may be ‘good enough’ and a lot faster in the deal. Ask yourself, does it really matter if the answer is out by 1%? 5%? even 10%?
    Down-sides: Well… the answer won’t be exact.
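Here is a minimal sketch of the pre-compute idea, assuming a hot function with a one-byte input range (the table and names are invented for the example):

#include <array>
#include <cstdint>
#include <iostream>

// Build the result table once (at compile time here), so the hot path
// replaces a bit-counting loop with a single indexed load.
constexpr std::array<std::uint8_t, 256> make_popcount_table() {
    std::array<std::uint8_t, 256> t{};
    for (int v = 0; v < 256; ++v) {
        int bits = 0;
        for (int x = v; x != 0; x >>= 1) bits += x & 1;
        t[v] = static_cast<std::uint8_t>(bits);
    }
    return t;
}

constexpr auto kPopcount = make_popcount_table();

int main() {
    std::cout << int(kPopcount[0xF0]) << '\n'; // prints 4
}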

When you can’t improve the performance any more – see if you can improve the perceived performance instead.

You may not be able to make your fooCalc algorithm faster, but often there are ways to make your application seem more responsive to the user.

A few examples:

  • anticipating what the user is going to request and starting to work on that ahead of time
  • displaying results as they come in, instead of all at once at the end
  • showing an accurate progress meter

These won’t make your program faster, but they might make your users happier with the speed you have.

I spend most of my life in just this place. The broad strokes are to run your profiler and get it to record:

  • Cache misses. The data cache is the #1 source of stalls in most programs. Improve the cache hit rate by reorganizing offending data structures to have better locality; pack structures and numerical types down to eliminate wasted bytes, and therefore wasted cache fetches (see the packing sketch after this list); prefetch data wherever possible to reduce stalls.
  • Load-hit-stores. Compiler assumptions about pointer aliasing, and cases where data is moved between disconnected register sets via memory, can cause a certain pathological behavior that causes the entire CPU pipeline to clear on a load op. Find places where floats, vectors, and ints are being cast to one another and eliminate them. Use __restrict liberally to promise the compiler about aliasing.
  • Microcoded operations. Most processors have some operations that cannot be pipelined, but instead run a tiny subroutine stored in ROM. Examples on the PowerPC are integer multiply, divide, and shift-by-variable-amount. The problem is that the entire pipeline stops dead while this operation is executing. Try to eliminate use of these operations or at least break them down into their constituent pipelined ops so you can get the benefit of superscalar dispatch on whatever the rest of your program is doing.
  • Branch mispredicts. These too empty the pipeline. Find cases where the CPU is spending a lot of time refilling the pipe after a branch, and use branch hinting if available to get it to predict correctly more often. Or better yet, replace branches with conditional-moves wherever possible, especially after floating point operations because their pipe is usually deeper and reading the condition flags after fcmp can cause a stall.
  • Sequential floating-point ops. Make these SIMD.
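A hedged illustration of the packing advice (the field names are invented; the exact sizes depend on the target ABI):

#include <cstdint>
#include <cstdio>

struct Careless {        // members ordered carelessly
    std::uint8_t  flag;  // 1 byte + 7 bytes of padding before 'id'
    std::uint64_t id;    // 8 bytes
    std::uint16_t count; // 2 bytes + 6 bytes of tail padding
};

struct Packed {          // widest members first
    std::uint64_t id;    // 8 bytes
    std::uint16_t count; // 2 bytes
    std::uint8_t  flag;  // 1 byte + 5 bytes of tail padding
};

int main() {
    // Typically prints 24 vs. 16 on 64-bit targets: one third fewer bytes
    // per element means fewer cache lines fetched when scanning an array.
    std::printf("Careless: %zu bytes, Packed: %zu bytes\n",
                sizeof(Careless), sizeof(Packed));
}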

And one more thing I like to do:

  • Set your compiler to output assembly listings and look at what it emits for the hotspot functions in your code. All those clever optimizations that “a good compiler should be able to do for you automatically”? Chances are your actual compiler doesn’t do them. I’ve seen GCC emit truly WTF code.

More suggestions:

  • Avoid I/O: Any I/O (disk, network, ports, etc.) is always going to be far slower than any code that is performing calculations, so get rid of any I/O that you do not strictly need.

  • Move I/O up-front: Load up all the data you are going to need for a calculation up-front, so that you do not have repeated I/O waits within the core of a critical algorithm (and maybe as a result repeated disk seeks, when loading all the data in one hit may avoid seeking).

  • Delay I/O: Do not write out your results until the calculation is over, store them in a data structure and then dump that out in one go at the end when the hard work is done.

  • Threaded I/O: For those daring enough, combine ‘I/O up-front’ or ‘Delay I/O’ with the actual calculation by moving the loading into a parallel thread, so that while you are loading more data you can work on a calculation on the data you already have, or while you calculate the next batch of data you can simultaneously write out the results from the last batch (a small sketch follows this list).
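A hedged sketch of that threaded-I/O overlap using std::async (the batch type and loader are stand-ins for real I/O):

#include <future>
#include <iostream>
#include <numeric>
#include <vector>

std::vector<int> load_batch(int index) {        // stands in for slow disk/network I/O
    return std::vector<int>(1000, index);
}

int main() {
    const int kBatches = 10;
    long long total = 0;

    // Start loading the first batch in the background.
    auto pending = std::async(std::launch::async, load_batch, 0);

    for (int i = 0; i < kBatches; ++i) {
        std::vector<int> batch = pending.get(); // wait for the loaded batch
        if (i + 1 < kBatches)                   // kick off the next load...
            pending = std::async(std::launch::async, load_batch, i + 1);
        // ...and compute on the current batch while the next one loads.
        total += std::accumulate(batch.begin(), batch.end(), 0LL);
    }
    std::cout << total << '\n';                 // (0+1+...+9) * 1000 = 45000
}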

I love all the

  1. Graph algorithms, the Bellman–Ford algorithm in particular.
  2. Scheduling algorithms, the round-robin scheduling algorithm in particular.
  3. Dynamic programming algorithms, the 0/1 knapsack algorithm in particular.
  4. Backtracking algorithms, the 8-queens algorithm in particular.
  5. Greedy algorithms, the fractional knapsack algorithm in particular.

We use all these algorithms in our daily life in various forms at various places.

For example, every shopkeeper applies one or more of several scheduling algorithms to service his customers, depending upon his service policy and situation. No single scheduling algorithm fits every situation.

All of us mentally apply one of the graph algorithms when we plan the shortest route to be taken when we go out for doing multiple things in one trip.

All of us apply one of the Greedy algorithms while selecting career, job, girlfriends, friends etc.

All of us apply one of the Dynamic programming algorithms when we do simple multiplication mentally by referring to the various mathematical products table in our memory.

How much faster is C compared to Python?


Python’s built-in sort uses TimSort, a sorting algorithm invented by Tim Peters that is now used in other languages such as Java.

TimSort is a complex algorithm that combines the best of many other algorithms, and it has the advantage of being stable: in other words, if two elements A and B are in the order A-then-B before the sort and they test equal during the sort, then the algorithm guarantees that the result will maintain that A-then-B ordering.

That means, for example, that if you want to order a set of student scores by score and then by name (so equal scores end up ordered alphabetically), you can sort by name first and then sort by score (see the sketch below).

TimSort has good performance against data sets which are partially sorted or already sorted (areas where some other algorithms struggle).
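Here is a hedged C++ analogue of that two-pass trick using std::stable_sort, which gives the same stability guarantee (the struct and sample data are invented for the illustration):

#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Student { std::string name; int score; };

int main() {
    std::vector<Student> v{{"carol", 90}, {"alice", 75}, {"bob", 90}};

    // Pass 1: order by the secondary key (name).
    std::stable_sort(v.begin(), v.end(),
                     [](const Student &a, const Student &b) { return a.name < b.name; });
    // Pass 2: stable-sort by the primary key (score);
    // equal scores keep their alphabetical order from pass 1.
    std::stable_sort(v.begin(), v.end(),
                     [](const Student &a, const Student &b) { return a.score < b.score; });

    for (const auto &s : v) std::cout << s.name << ' ' << s.score << '\n';
    // prints: alice 75, then bob 90 before carol 90
}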

 
 
Timsort – Wikipedia
Timsort was designed to take advantage of runs of consecutive ordered elements that already exist in most real-world data (“natural runs”). It iterates over the data, collecting elements into runs and simultaneously putting those runs in a stack. Whenever the runs on the top of the stack match a merge criterion, they are merged. This goes on until all data is traversed; then all runs are merged two at a time until only one sorted run remains.




I’m currently coding a SAT solver algorithm that will have to take millions of input data, and I was wondering if I should switch from Python to C.

Answer: Using equivalent best-of-class algorithms, optimized compiled C code is often multiple orders of magnitude faster than Python code interpreted by CPython (the main Python implementation). Other Python implementations (like PyPy) might be a bit better, but not vastly so. Some computations fit Python better, but I have a feeling that a SAT solver implementation will not be competitive if written in Python.

All that said, do you need to write a new implementation? Could you use one of the excellent ones out there? CDCL implementations often do a good job, and there are various open-source ones readily available (e.g., this one: https://github.com/togatoga/togasat).

Comments:

1- I mean, it also depends. I recall seeing an analysis some time ago showing that CPython can be as fast as C… provided you are almost exclusively using library functions written in C. That being said, for any non-trivial Python program it will probably be the case that you must spend quite a bit of time in the interpreter, and not in C library functions.


The other answers are mistaken. This is a very common confusion. They describe a statically typed language, not a strongly typed language. There is a big difference.

Strongly typed vs weakly typed:

In strongly typed languages you get an error if the types do not match in an expression. It does not matter if the type is determined at compile time (static types) or runtime (dynamic types).

Both Java and Python are strongly typed. In both languages, you get an error if you try to add objects with unmatching types. For example, in Python, you get an error if you try to add a number and a string:

  • >>> a = 10 
  • >>> b = “hello” 
  • >>> a + b 
  • Traceback (most recent call last): 
  • File “<stdin>”, line 1, in <module> 
  • TypeError: unsupported operand type(s) for +: ‘int’ and ‘str’ 

In Python, you get this error at runtime. In Java, you would get a similar error at compile time. Most statically typed languages are also strongly typed.

The opposite of strongly typed language is weakly typed. In a weakly typed language, there are implicit type conversions. Instead of giving you an error, it will convert one of the values automatically and produce a result, even if such conversion loses data. This often leads to unexpected and unpredictable behavior.

Javascript is an example of a weakly typed language.

  • > let a = 10 
  • > let b = “hello” 
  • > a + b 
  • ’10hello’ 

Instead of an error, JavaScript will convert a to string and then concatenate the strings.

Static types vs dynamic types:

In a statically typed language, variables are bound to types and may only hold data of that type. Typically you declare variables and specify the type of data that the variable holds. In some languages, the type can be deduced from what you assign to it, but it still holds that the variable is bound to that type. For example, in Java:

  • int a = 3; 
  • a = “hello” // Error, a can only contain integers 

In a dynamically typed language, variables may hold any type of data. The type of the data is simply determined by what gets assigned to the variable at runtime. Python is dynamically typed, for example:

  • a = 10 
  • a = “hello” 
  • # no problem, a first held an integer and then a string 

Comments:

#1: Don’t confuse strongly typed with statically typed.

Python is dynamically typed and strongly typed.
Javascript is dynamically typed and weakly typed.
Java is statically typed and strongly typed.
C is statically typed and weakly typed.

See these articles for a longer explanation:
Magic lies here – Statically vs Dynamically Typed Languages
Key differences between mainly used languages for data science

I also added a drawing that illustrates how strong and static typing relate to each other:

Python is dynamically typed because types are determined at runtime. The opposite of dynamically typed is statically typed (not strongly typed)

Python is strongly typed because it will give errors when types don’t match instead of performing implicit conversion. The opposite of strongly typed is weakly typed

Python is strongly typed and dynamically typed

What is the difference between finalize() and destructor in Java?

finalize() is not guaranteed to be called, and the programmer has no control over when or in what order finalizers are called.

They are useless and should be ignored.

A destructor is not part of Java. It is a C++ language feature with very precise definitions of when it will be called.

Comments:

1- Until we got to languages like Rust (with the Drop trait) and a few others, was C++ the only language which had the destructor as a concept? I feel like other languages were inspired by it.

2- Many other languages manage memory for you, even ones predating C: COBOL, FORTRAN and so on. That’s another reason why there isn’t much attention paid to destructors.

What are some ways to avoid writing static helper classes in Java?

Mainly getting out of that procedural ‘function operates on parameters passed in’ mindset.

Tactically, the static can normally be moved onto one of the parameter objects. Or all the parameters become an object that the static moves to. A new object might be needed. Once done the static is now a fully fledged method on an object and is not static anymore.

I view this as a positive iterative step in discovering objects for a system.

For cases where a static makes sense (? none come to mind) then a good practice is to move it closer to where it is used either in the same package or on a class that is strongly related.

I avoid having global ‘Utils’ classes full of statics that are unrelated. That’s fairly basic design, keeping unrelated things separate. In this case, the SOLID ISP principle applies: segregate into smaller, more focused interfaces.

Is there any programming language as easy as python and as fast and efficient as C++, if yes why it’s not used very often instead of C or C++ in low level programming like embedded systems, AAA 2D and 3D video games, or robotic?

Not really. I use Python occasionally for “quick hacks” – programs that I’ll probably run once and then delete – and also because I use Blender for 3D modeling, and Python is its scripting language.

I used to write quite a bit of JavaScript for web programming but since WASM came along and allows me to run C++ at very nearly full speed inside a web browser, I write almost zero JavaScript these days.

I use C++ for almost everything.

Once you get to know C++ it’s no harder than Python – the main thing I find great about Python is the number of easy-to-find libraries.

But in AAA games – the poor performance of Python pretty much rules it out.

In embedded systems, the computer is generally too small to fit a Python interpreter into memory – so C or C++ is a more likely choice.

This was actually one of the interview questions I got when I applied at Google.

“Write a function that returns the average of two number.”

So I did, the way you would expect: (x+y)/2. I did it as a C++ template so it works for any kind of number.

interviewer: “What’s wrong with it?”

Well, I suppose there could be an overflow if adding the two numbers requires more space than the numeric type can hold. So I rewrote it as (x/2) + (y/2).

interviewer: “What’s wrong with it now?”

Well, I think we are losing a little precision by pre-dividing. So I wrote it another way.

interviewer: “What’s wrong with it now?”

And that went on for about 10 minutes. It ended with us talking about the heat death of the universe.

I got the job and ended up working with the guy. He said he had never done that before. He had just wanted to see what would happen.

Comments:

1-

The big problem you get with x/2 + y/2 is that it can/will give incorrect answers for integer inputs. For example, let’s average 3 and 3. The result should obviously be 3.

But with integer division, 3/2 = 1, and 1+1 = 2.

You need to add one to the result if and only if both inputs are odd.

2- Here’s what I’d do in C++ for integers, which I believe does the right thing including getting the rounding direction correct, and it can likely be made into a template that will do the right thing as well. This is not complete code, but I believe it gets the details correct…

[Code image: Programming – Find the average of 2 numbers]

That will work for any signed or unsigned integer type for op1 and op2 as long as they have the same type.

If you want it to do something intelligently where one of the operands is an unsigned type and the other one is a signed type, you could do it, but you need to define exactly what should happen, and realize that it’s quite likely that for maximum arithmetic correctness, the output type may need to be different than either input type. For instance, the average of a uint32_t and an int32_t can be too large to fit in an int32_t, and it can also be too small to fit in a uint32_t, so you probably need to go with a larger signed integer type, maybe int64_t.
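The pictured code is lost, but as one hedged stand-in for a same-type, overflow-aware average: C++20 provides std::midpoint in <numeric>, and a classic manual form for unsigned values is lo + (hi - lo) / 2:

#include <cstdint>
#include <iostream>
#include <limits>
#include <numeric>

int main() {
    // Naive (a + b) / 2 would overflow here, and a/2 + b/2 drops the odd halves.
    std::int32_t a = std::numeric_limits<std::int32_t>::max();
    std::int32_t b = std::numeric_limits<std::int32_t>::max() - 2;
    std::cout << std::midpoint(a, b) << '\n'; // correct: max - 1

    // Manual overflow-safe form for unsigned values with lo <= hi:
    std::uint32_t lo = 10, hi = 4000000000u;
    std::cout << lo + (hi - lo) / 2 << '\n';  // 2000000005
}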

3- I would have answered the question with a question, “Tell me more about the input, error handling capability of your system, and is this typical of the level of challenge here at google?” Then I’d provide eye contact, sit back, and see what happens. Years ago I had an interview question that asked what classical problem was part of a pen plotter control system. I told the interviewer that it was TSP but that if you had to change pens, you had to consider how much time it took to switch. They offered me a job but I declined given the poor financial condition of the company (SGI) which I discovered by asking the interviewer questions of my own. IMO: questions are at the heart of engineering. The interviewer, if they are smart, wants to see if you are capable of discovering the true nature of their problems. The best programmers I’ve ever worked with were able to get to the heart of problems and trade off solutions. Coding is a small part of the required skills.

Yes, they can.

There are features in HTTP to allow many different web sites to be served on a single IP address.

You can, if you are careful, assign the same IP address to many machines (it typically can’t be their only IP address, however, as distinguishable addresses make them much easier to manage).

You can run arbitrary server tasks on your many machines with the same IP address if you have some way of sending client connections to the correct machine. Obviously that can’t be the IP address, because they’re all the same. But there are ways.

However… this needs to be carefully planned. There are many issues.

– Andrew Mc Gregor

It depends on how you want to store and access data.

For the most part, as a general concept, old school cryptography is obsolete.

It was based on ciphers, which in turn relied on certain problems being mathematically “hard” to crack.

If you can throw a compute cluster at DES, even with a one byte “salt”, it’s pretty easy to crack a password database in seconds. Minutes, if your cluster is small.

Almost all computer security is based on big-number theory. Today, that’s called: Law of large numbers – Wikipedia


What it means is that it’s hard to do math on very large numbers, and so if you have a large one, the larger the better.

Most cryptography today is based on elliptic curves.

But what we know from the proof of Fermat’s Last Theorem, and specifically the Taniyama–Shimura conjecture, is that all elliptic curves have modular forms.

And so this gives us an avenue of attack on all modern cryptography, using graphical mathematics.

It’s an interesting field, and problem space.

Not one I’m interested in solving, since I’m sure it has already been solved by my “associates” who now work for the NSA.

I am only interested in new problems.

Comments:

1- Sorry, but this is just wrong. “Almost all cryptography,” counted by number of bytes encrypted and decrypted, uses AES. AES does not use “large numbers,” elliptic curves, or anything of that sort – it’s essentially combinatorial in nature, with a lot of bit-diddling – though there is some group theory at its base. The same can be said about cryptographic checksums such as the SHA series, including the latest “sponge” constructions.

Where RSA and elliptic curves and such come in is public-key cryptography. This is important in setting up connections, but for multiple reasons (performance – but also excellent cryptographic reasons) it is not used for bulk encryption. There are related algorithms like Diffie–Hellman and some signature protocols like DSS. All of these “use large numbers” in some sense, but even that’s pushing it – elliptic-curve cryptography involves doing math over points on an elliptic curve, which does lead you to do some arithmetic, but the big advantage of elliptic curves is that the numbers are way, way smaller than for, say, RSA at equivalent security.

Much research these days is on “post-quantum cryptography” – cryptography that is secure against attacks by quantum computers (assuming we ever make those work). These tend not to be based on “arithmetic” in any straightforward sense – the ones that seem to be at the forefront these days are based on computation over lattices.

Cracking a password database that uses DES is so far away from what cryptography today is about that it’s not even related. Yes, the original Unix implementations – almost 50 years ago – used that approach. So?

C++ lambda functions are syntactic sugar for a longstanding set of practices in both C and C++: passing a function as an argument to another function, and possibly connecting a little bit of state to it.

This goes way back. Look at C’s qsort():

[Image: C’s qsort() declaration]
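The image is gone; the standard C declaration it presumably showed is:

void qsort(void *base, size_t nmemb, size_t size,
           int (*compar)(const void *, const void *));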

That last argument is a function pointer to a comparison function. You could use a captureless lambda for the same purpose in modern C++.

Sometimes, you want to tack a little bit of extra state alongside the function. In C, one way to do this is to provide an additional context pointer alongside the function pointer. The context pointer gets passed back to the function as an argument.

I’ve given an extended example of that pattern elsewhere.

In C++, that context pointer can be this. When you do that, you have something called a function object. (Side note: function objects were sometimes called functors; however, functors aren’t really the same thing.)

If you overload the function call operator for a particular class, then objects of that class behave as function objects. That is, you can pretend like the object is a function by putting parentheses and an argument list after the name of an instance! When you arrive at the overloaded operator implementation, this will point at the instance.

Instances of this class will add an offset to an integer. The function call operator is operator() below.

and to use it:

[Image: a C++ class overloading operator(), and code using it]
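The original code image is missing; a hedged reconstruction consistent with the surrounding description (the class name is invented) might look like:

#include <iostream>

class AddOffset {
    int offset;                                         // state carried by the object
public:
    explicit AddOffset(int off) : offset(off) {}
    int operator()(int x) const { return x + offset; }  // the function call operator
};

int main() {
    AddOffset add42(42);                // a function object
    for (int i = 0; i < 10; ++i)
        std::cout << add42(i) << '\n';  // "calls" the object like a function
}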

That’ll print out the numbers 42, 43, 44, … 51 on separate lines.

And tying this back to the qsort() example from earlier: C++’s std::sort can take a function object for its comparison operator.

Modern C++’s lambda functions are syntactic sugar for function objects. They declare a class with an unutterable name, and then give you an instance of that class. Under the hood, the class’ constructor implements the capture, and initializes any state variables.
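For instance, a lambda like [offset](int x) { return x + offset; } behaves roughly as if the compiler had generated a class along these lines (the name is illustrative; the real one is unutterable):

class __CompilerGeneratedLambda {
    int offset;                         // the by-value capture becomes a member
public:
    explicit __CompilerGeneratedLambda(int off) : offset(off) {} // capture happens here
    int operator()(int x) const { return x + offset; }
};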

Other languages have similar constructs. I believe this one originated in LISP. It goes waaaay back.

As for any challenges associated with them: lifetime management. You potentially introduce a non-nested lifetime for any state associated with the callback, function object, or lambda.

If it’s all self contained (i.e. it keeps its own copies of everything), you’re less likely to have a problem. It owns all the state it relies on.

If it has non-owning pointers or references to other objects, you need to ensure the lifetime of your callback/function object/lambda remains within the lifetime of that other non-owned object. If that non-owned object’s lifetime isn’t naturally a superset of the callback/function object/lambda, you should consider taking a copy of that object, or reconsider your design.

Each one has specific strengths in terms of syntax features.

But the way to look at this is that all three are general purpose programming languages. You can write pretty much anything in them.

Trying to rank these languages in some kind of absolute hierarchy makes no sense and only leads to tribal ‘fanboi’ arguments.

If you need part of your code to talk to hardware, or could benefit from taking control of memory management, C++ is my choice.

General web service stuff, Java has an edge due to familiarity.

Anything involving a pre existing Microsoft component – eg data in SQL server, Azure – I will go all in on C#

I see more similarity than difference overall.

Visual Studio Code is OK if you can’t find anything better for the language you’re using. There are better alternatives for most popular languages.

C# – Use Visual Studio Community, it’s free, and far better than Visual Studio Code.

Java – Use IntelliJ

Go – Goland.

Python – PyCharm.

C or C++ – CLion.

If you’re using a more unusual language, maybe Rust, Visual Studio Code might be a good choice.

Comments:

#1: Just chipping in here. I used to be a massive visual studio fan boy and loved my fancy gui for doing things without knowing what was actually happening. I’ve been using vscode and Linux for a few years now and am really enjoying the bare metal exposure you get with working on it (and linux) typing commands is way faster to get things done than mouse clicking through a bunch of guis. Both are good though.

#2:  C# is unusual in that it’s the only language which doesn’t follow the maxim, “if JetBrains have blessed your language with attention, use their IDE”.

Visual Studio really is first class.

#3: for Rust as long as you have rust-analyzer and clippy, you’re good to go. Vim with lua and VS Code both work perfectly.

#4: This is definitely skirting the realm of opinion. It’s a great piece of software. There is better and worse stuff but it all depends upon the person using it, their skill, and style of development.

#5: VSCode is excellent for coding. I’ve been using it for about 6 years now, mainly for Python work, but also developing JS based mobile apps. I mainly use Visual Studio, but VSC’s slightly stripped back nature has been embellished with plenty of updates and more GUI discovery methods, plus that huge extensions library (I’ve worked with the creation of an intellisense style plugin as well).

I’m personally a fan of keeping it simple on IDEs, and I work in a lot of languages. I’m not installing 6 or 7 IDEs because they apparently have advantages in that specific language, so I’d rather install one IDE which can do a credible job on all of them.

I’m more a fan of developing software than getting anally retentive about knowing all the keyboard shortcuts to format a source file. Life’s too short for that. Way too short!

To each their own. Enjoy whatever you use!

Dmitry Aliev is correct that this was introduced into the language before references.

I’ll take this question as an excuse to add a bit more color to this.

C++ evolved from C via an early dialect called “C with Classes”, which was initially implemented with Cpre, a fancy “preprocessor” targeting C that didn’t fully parse the “C with Classes” language. What it did was add an implicit this pointer parameter to member functions. E.g.:
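```cpp
// A minimal sketch: a class with one ordinary member function
struct S {
    int f();   // no explicit parameters -- 'this' is implicit
};
```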


was translated to something like:

  • int f__1S(S *this); 

(the funny name f__1S is just an example of a possible “mangling” of the name of S::f, which allows traditional linkers to deal with the richer naming environment of C++).

What might come as a surprise to the modern C++ programmer is that in that model this is an ordinary parameter variable and therefore it can be assigned to! Indeed, in the early implementations that was possible:
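```cpp
// Sketch only -- this has NOT been valid C++ for decades:
struct S {
    void f() {
        this = 0;   // 'this' was an ordinary parameter variable in
                    // C with Classes, so it could be assigned to
    }
};
```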

 

Interestingly, an idiom arose around this ability: Constructors could manage class-specific memory allocation by “assigning to this” before doing anything else in the constructor. E.g.:
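```cpp
// Sketch of the old idiom (pre-standard C++; 'my_alloc' is a made-up
// class-specific allocator -- do not try this with a modern compiler):
class Task {
public:
    Task() {
        if (this == 0)                              // no memory provided yet
            this = (Task*)my_alloc(sizeof(Task));   // take over allocation
        // ... normal construction continues ...
    }
};
```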

 

That technique (brittle as it was, particularly when dealing with derived classes) became so widespread that when C with Classes was re-implemented with a “real” compiler (Cfront), assignment to this remained valid in constructors and destructors even though this had otherwise evolved into an immutable expression. The C++ front end I maintain still has modes that accept that anachronism. See also section 17 of the old Cfront manual found here, for some fun reminiscing.

When standardization of C++ began, the core language work was handled by three working groups: Core I dealt with declarative stuff, Core II dealt with expression stuff, and Core III dealt with “new stuff” (templates and exception handling, mostly). In this context, Core II had to (among many other tasks) formalize the rules for overload resolution and the binding of this. Over time, they realized that that name binding should in fact be mostly like reference binding. Hence, in standard C++ the binding of something like:
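```cpp
// A minimal sketch of the idea:
struct S { int f() { return 0; } };

int main() {
    S x;
    x.f();   // the object argument x now binds much like a reference
             // parameter would:  int f__1S(S &__this);  with __this = x
}
```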

 

In other words, the expression this is now effectively a kind of alias for &__this, where __this is just a name I made up for an unnamable implicit reference parameter.

C++11 further tweaked this by introducing syntax to control the kind of reference that this is bound from. E.g.,
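```cpp
// Sketch of C++11 ref-qualifiers, which control what 'this' binds from:
struct S {
    void f() &  {}   // chosen when the object expression is an lvalue
    void f() && {}   // chosen when the object expression is an rvalue
};

int main() {
    S s;
    s.f();     // calls f() &
    S{}.f();   // calls f() &&
}
```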


That model was relatively well-understood by the mid-to-late 1990s… but then unfortunately we forgot about it when we introduced lambda expressions. Indeed, in C++11 we allowed lambda expressions to “capture” this:
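```cpp
// Sketch of the pitfall: [this] captures the pointer, not the object
struct Widget {
    int value = 42;
    auto getter() {
        return [this] { return value; };   // really captures 'this' --
                                           // dangles if the Widget dies first
    }
};
```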


 
 

After that language feature was released, we started getting many reports of buggy programs that “captured” this thinking they captured the class value, when instead they really wanted to capture __this (or *this). So we scrambled to try to rectify that in C++17, but because lambdas had gotten tremendously popular we had to make a compromise. Specifically:

  • we introduced the ability to capture *this (see the sketch after this list)
  • we also allowed [=, this] (this landed in C++20), since [this] is really a “by reference” capture of *this
  • even though [this] was now a “by reference” capture, we left in the ability to write [&, this], despite it being redundant (for compatibility with earlier standards)
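A sketch of the *this capture mentioned in the first bullet (names assumed):

```cpp
// C++17: capture a *copy* of the object instead of the pointer
struct Widget {
    int value = 42;
    auto snapshot() {
        return [*this] { return value; };  // copies *this into the closure
    }
};
```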

Our tale is not done, however. Once you write much generic C++ code you’ll probably find out that it’s really frustrating that the __this parameter cannot be made generic because it’s implicitly declared. So we (the C++ standardization committee) decided to allow that parameter to be made explicit in C++23. For example, you can write (example from the linked paper):
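```cpp
// Sketch in the spirit of the paper's examples (not a verbatim quote):
struct less_than {
    template <typename T, typename U>
    bool operator()(this less_than self, T const& lhs, U const& rhs) {
        // 'self' is the explicit object parameter, taken *by value* here
        return lhs < rhs;
    }
};

bool b = less_than{}(3, 5);   // true
```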


In that example, the “object parameter” (i.e., the previously hidden reference parameter __this) is now an explicit parameter and it is no longer a reference!

Here is another example (also from the paper):
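```cpp
// Again a sketch, not the paper's verbatim code: the explicit object
// parameter's type is deduced, and it can deduce to a *derived* type.
#include <iostream>

struct B {
    template <typename Self>
    void greet(this Self&& self) {
        self.name();                      // looked up on the deduced type
    }
    void name() { std::cout << "B\n"; }
};

struct D : B {
    void name() { std::cout << "D\n"; }
};

int main() {
    D d;
    d.greet();   // Self deduces to D&, so D::name runs -- no virtual needed
}
```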

 


Here:

  • the type of the object parameter is a deducible template-dependent type
  • the deduction actually allows a derived type to be found

This feature is tremendously powerful, and may well be the most significant addition by C++23 to the core language. If you’re reasonably well-versed in modern C++, I highly recommend reading that paper (P0847) — it’s fairly accessible.

It adds some extra steps in design, testing and deployment for sure. But it can buy you an easier path to scalability and an easier path to fault tolerance and live system upgrades.

It’s not REST itself that enables that. But if you use REST you will have split your code up into independently deployable chunks called services.

So more development work to do, yes, but you get something a single monolith can’t provide. If you need that, then the REST service approach is a quick way of getting it.

We must compare like for like in terms of results for questions like this.

Because at the time, there was likely no need.

Based on what I could find, the strtok library function appeared in System III UNIX some time in 1980.

In 1980, memory was small, and programs were single threaded. I don’t know whether UNIX had any support for multiple processors, even. I think that happened a few years later.

Its implementation was quite simple.
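A minimal sketch of the technique (not the actual System III source):

```cpp
#include <cstring>

// strtok-style tokenizer: the saved position is a single hidden static --
// which is exactly why it can't handle two strings (or two threads) at once.
char *my_strtok(char *s, const char *delim) {
    static char *save = nullptr;          // the internal state
    if (s == nullptr) s = save;           // continue the previous string
    if (s == nullptr) return nullptr;
    s += std::strspn(s, delim);           // skip leading delimiters
    if (*s == '\0') { save = nullptr; return nullptr; }
    char *tok = s;
    s += std::strcspn(s, delim);          // find the end of the token
    if (*s != '\0') *s++ = '\0';          // terminate the token in place
    save = s;
    return tok;
}
```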


 

This was 3 years before they started the standardization process, and 9 years before it was standardized in ANSI C.

This was simple and good enough, and that’s what mattered most. It’s far from the only library function with internal state.

And Lex/YACC took over more complex scanning and parsing tasks, so it probably didn’t get a lot of attention for the lightweight uses it was put to.

For a tongue-in-cheek take on how UNIX and C were developed, read this classic:

 
The Rise of “Worse is Better”, by Richard Gabriel:

I and just about every designer of Common Lisp and CLOS has had extreme exposure to the MIT/Stanford style of design. The essence of this style can be captured by the phrase “the right thing.” To such a designer it is important to get all of the following characteristics right:

• Simplicity: the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
• Correctness: the design must be correct in all observable aspects. Incorrectness is simply not allowed.
• Consistency: the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
• Completeness: the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.

I believe most people would agree that these are good characteristics. I will call the use of this philosophy of design the “MIT approach.” Common Lisp (with CLOS) and Scheme represent the MIT approach to design and implementation.

The worse-is-better philosophy is only slightly different:

• Simplicity: the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
• Correctness: the design must be correct in all observable aspects. It is slightly better to be simple than correct.
• Consistency: the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.
• Completeness: the design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.

Early Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the “New Jersey approach.” I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach. However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach. Let me start out by retelling a story that shows that the MIT/New-Jersey distinction is valid and that proponents of each philosophy actually believe their philosophy is better.
 
 

Because the ‘under the hood’ code is about 50 years old. I’m not kidding. I worked on some video poker machines that were made in the early 1970’s.

Here’s how they work.

You have an array of ‘cards’ from 0 to 51. Pick one at random. Slap it in position 1 and take it out of your array. Do the same for the next card … see how this works?

Video poker machines are really that simple. They literally simulate a deck of cards.
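In code, that dealing loop amounts to something like this (a toy sketch, not actual machine firmware):

```cpp
// Pick a random card, move it to the dealt hand, shrink the remaining deck.
#include <iostream>
#include <random>
#include <vector>

int main() {
    std::vector<int> deck(52);
    for (int i = 0; i < 52; ++i) deck[i] = i;       // cards 0..51

    std::random_device rd;
    std::mt19937 gen(rd());

    std::vector<int> hand;
    for (int n = 0; n < 5; ++n) {                   // deal a 5-card hand
        std::uniform_int_distribution<int> pick(0, (int)deck.size() - 1);
        int idx = pick(gen);
        hand.push_back(deck[idx]);
        deck.erase(deck.begin() + idx);             // take it out of the array
    }

    for (int card : hand) std::cout << card << ' ';
    std::cout << '\n';
}
```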

Anything else, at least in Nevada, is illegal. Let me rephrase that, it is ILLEGAL, in all caps.

If you were to try to make a video poker game (or video keno, or slot machine) in any other way than as close to truly random selection from an ‘array’ of options as you can get, Nevada Gaming will come after you so hard and fast, your third cousin twice removed will have their ears ring for a week.

That is if the Families don’t get you first, and they’re far less kind.

All the ‘magic’ is in the payout tables, which on video poker and keno are literally posted on every machine. If you can read them, you can figure out exactly what the payout odds are for any machine.

There’s also a little note at the bottom stating that the video poker machine you’re looking at uses a 52 card deck.

Comments:

1- I have a slot machine, and the code on the odds chip looks much like an Excel spreadsheet: every combination is displayed in this spreadsheet, so the exact odds can be listed in payout tables. The machine picks a random number, let’s say 452 in 1000. The computer looks at the spreadsheet and says that this is the combination bar-bar-7, and you get 2 credits for it. The wheels will spin to match the indication on the spreadsheet. If I go into the game diagnostics I can see whether it is a win or not; you do not win on what the wheels display, but on the actual number from the spreadsheet. The game knows whether you won or lost before the wheels stop.

2- I had a conversation with a guy who had retired from working in casino security. He was also responsible for some setup and maintenance on slot machines, video poker and others. I asked about the infamous video poker machine that a programmer at the manufacturer had put in a backdoor so he and a few pals could get money. That was just before he’d started but he knew how it was done. IIRC there was a 25 step process of combinations of coin drops and button presses to make the machine hit a royal flush to pay the jackpot.

Slot machines that have mechanical reels actually run very large virtual reels. The physical reels have position encoders so the electronics and software can select which symbol to stop on. This makes for far more possible combinations than relying on the space available on the physical reels.

Those islands of machines with the sign that says 95% payout? Well, you guess which machine in the group is set to that payout % while the rest are much closer to the minimum allowed.

Machines with a video screen that gives you a choice of things to select by touch or button press? It doesn’t matter what you select, the outcome is pre-determined. For example, if there’s a grid of spots and the first three matches you get determines how many free spins you get, if the code stopped on giving you 7 free spins, out of a possible maximum of 25, you’re getting 7 free spins no matter which spots you touch. It will tease you with a couple of 25s, a 10 or 15 or two, but ultimately you’ll get three 7s, and often the 3rd 25 will be close to the other two or right next to the last 7 “you” selected to make you feel like you just missed it when the full grid is briefly revealed.

There was a Discovery Channel show where the host used various power tools to literally hack things apart to show their insides and how they worked. In one episode he sawed open a couple of slot machines, one from the 1960’s and a purely mechanical one from the 1930’s or possibly 1940’s. In that old machine he discovered the casino it had been in decades prior had installed a cheat. There was a metal wedge bolted into the notch for the 7 on one reel so it could never hit the 777 jackpot. I wondered if the Nevada Gaming Commission could trace the serial number and if they could levy a fine if the company that had owned and operated it was still in business.

3- Slightly off-topic. I worked for a company that sold computer hardware, and one of our customers was a company that makes gambling machines. They said that they spent close to $0 on software and all their budget on licensing characters.

This question is like asking why you would ever use int when you have the Integer class. Java programmers seem especially zealous about everything needing to be wrapped, and wrapped, and wrapped.

Yes, ArrayList<Integer> does everything that int[] does and more… but sometimes all you need to do is swat a fly, and you just need a flyswatter, not a machine-gun.

Did you know that in order to convert int[] to ArrayList<Integer>, the system has to go through the array elements one at a time and box them, which means creating a garbage-collected object on the heap (i.e. an Integer) for each individual int in the array? That’s right; if you just use int[], then only one memory allocation is needed, as opposed to one for each item.

I understand that most Java programmers don’t know about that, and the ones who do probably don’t care. They will say that this isn’t going to be the reason your program is running slowly. They will say that if you need to care about those kinds of optimizations, then you should be writing code in C++ rather than Java. Yadda yadda yadda, I’ve heard it all before. Personally though, I think that you should know, and should care, because it just seems wasteful to me. Why dynamically allocate n individual objects when you could just have a contiguous block in memory? I don’t like waste.

I also happen to know that if you have a blasé attitude about performance in general, then you’re apt to be the sort of programmer who unknowingly, unnecessarily writes four nested loops and then has no idea why their program took ten minutes to run even though the list was only 100 elements long. At that point, not even C++ will save you from your inefficiently written code. There’s a slippery slope here.

I believe that a software developer is a sort of craftsman. They should understand their craft, not only at the language level, but also how it works internally. They should convert int[] to ArrayList<Integer> only because they know the cost is insignificant, and they have a particular reason for doing so other than “I never use arrays, ArrayList is better LOL”.

Very similar, yes.

Both languages feature:

  • Static typing
  • Nominative interface typing
  • Garbage collection
  • Class-based object orientation
  • Single-dispatch polymorphism

so whilst syntax differs, the key things that separate OO support across languages are the same.

There are differences, but you can write the same design of OO program in either language and it won’t look out of place.

Last time I needed to write an Android app, even though I already knew Java, I still went with Kotlin 😀

I’d rather work in a language I don’t know than… Java… and yes, I know a decent Java IDE can auto-generate this code – but this only solves the problem of writing the code, it doesn’t solve the problem of having to read it, which happens a lot more than writing it.

I mean, which conveys the programmer’s intent more clearly: a Kotlin data class declared in a single line, or the equivalent Java class with its fields, constructor, getters, equals(), hashCode() and toString() all spelled out by hand? And which would you rather read when you forget what a part of the program does and need a refresher?

Even if both of them required no effort to write… the Java version is pure brain poison…

Because it’s insufficient to deal with the memory semantics of current computers. In fact, it was obsolete almost as soon as it first became available.

Volatile tells a compiler that it may not assume the value of a memory location has not changed between reads or writes. This is sometimes sufficient to deal with memory-mapped hardware registers, which is what it was originally for.

But that doesn’t deal with the semantics of a multiprocessor machine’s cache, where a memory location might be written and read from several different places, and we need to be sure we know when written values will be observable relative to control flow in the writing thread.

Instead, we need to deal with acquire/release semantics of values, and the compilers have to output the right machine instructions that we get those semantics from the real machines. So, the atomic memory intrinsics come to the rescue. This is also why inline assembler acts as an optimization barrier; before there were intrinsics for this, it was done with inline assembler. But intrinsics are better, because the compiler can still do some optimization with them.
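A minimal sketch of release/acquire publication using C++11 atomics (the portable std::atomic form of the intrinsics mentioned above):

```cpp
#include <atomic>
#include <iostream>
#include <thread>

int payload = 0;
std::atomic<bool> ready{false};

void producer() {
    payload = 42;                                   // plain write
    ready.store(true, std::memory_order_release);   // publish it
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // wait for the publish
    std::cout << payload << '\n';                   // guaranteed to see 42
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```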

C++ is a programming language specified through a standard that is “abstract” in various ways. For example, that standard doesn’t currently formally recognize a notion of “runtime” (I would actually like to change that a little bit in the future, but we’ll see).

Now, in order to allow implementations to make assumptions it removes certain situations from the responsibility of the implementation. For example, it doesn’t require (in general) that the implementation ensure that accesses to objects are within the bounds of those objects. By dropping that requirement, the code for valid accesses can be more efficient than would be required if out-of-bounds situations were the responsibility of the implementation (as is the case in most other modern programming languages). Those “situations” are what we call “undefined behaviour”: The implementation has no specific responsibilities and so the standard allows “anything” to happen. This is in part why C++ is still very successful in applications that call for the efficient use of hardware resources.
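A tiny example of such a situation:

```cpp
int a[4] = {1, 2, 3, 4};

int oops(int i) {
    return a[i];   // if i == 4 the access is out of bounds: undefined
                   // behaviour, which is what lets valid accesses stay
                   // free of bounds checks
}
```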

Note, however, that the standard doesn’t disallow an implementation from doing something that is implementation-specified in those “undefined behaviour” situations. It’s perfectly all right (and feasible) for a C++ implementation to be “memory safe” for example (e.g., not attempt access outside of object bounds). Such implementations have existed in the past (and might still exist, but I’m not currently aware of one that completely “contains” undefined behaviour).

ADDENDUM (July 16th, 2021):

The following article about undefined behavior crossed my metaphorical desk today:

To Conclude:

Coding is a process of translating and transforming a problem into a step-by-step set of instructions for a machine. Like every skill, it requires time and practice to learn. However, by following some simple tips, you can make the learning process easier and faster. First, it is important to start with the basics. Do not try to learn too many programming languages at once; it is better to focus on one language and master it before moving on to the next. Second, make use of resources such as books, online tutorials, and coding bootcamps. These can provide you with the structure and support you need to progress quickly. Finally, practice regularly and find a mentor who can offer guidance and feedback. By following these tips, you can develop the programming skills you need to succeed in your career.

There are plenty of resources available to help you improve your coding skills. Check out some of our favorite coding tips below:

– Find a good code editor and learn its shortcuts. This will save you time in the long run.
– Do lots of practice exercises. It’s important to get comfortable with the syntax and structure of your chosen programming language.
– Get involved in the coding community. There are many online forums and groups where programmers can ask questions, share advice, and collaborate on projects.
– Read code written by experienced developers. This will give you insight into best practices and advanced techniques.


How many spaces is a tab in Java, Rust, C++, Python, C#, Powershell, Golang, Javascript


How many spaces is a tab in Java, Rust, C++, Python, C#, Powershell, Golang, etc.?

A tab is not made out of spaces. It is a tab, whether in Java, Python, Rust, or generic text file editing. It is represented by a single Unicode character, U+0009.

It does not generally mean “insert this many spaces here” either. It means “put the cursor at the next closest tab stop in the line.” What that means exactly depends on the context. On an old typewriter I had, the tab key would advance the roller to the next column that was a multiple of 10.


That is pretty much the same function the tab character performs.

Just for reference, modern text editors have their tab stops set at every 4 or every 8 characters. That doesn’t mean that 1 tab = 4/8 spaces; it means that inserting a tab will align the cursor with the next multiple of 4/8 columns.


In mainstream IDEs you can set the tab key to insert a desired number of spaces instead of a tab character.


The concept of a tab independent of spaces is rarely used these days. In any case, what the character represents, what the key does, and what the screen shows are all decoupled from one another.

In many IDEs, the tab character inserts the required number of spaces to advance to the next tab line. This is often a default.

I imagine it’s a compromise between tab-loving extremists and space advocates. The ideal whitespace character is a subject of intense debate among programmers.



Source: Quora


Top 100 Data Science and Data Analytics and Data Engineering Interview Questions and Answers


Below are the Top 100 Data Science and Data Analytics Interview Questions and Answers.

What is Data Science? 

Data Science is a blend of various tools, algorithms, and machine learning principles with the goal to discover hidden patterns from the raw data. How is this different from what statisticians have been doing for years? The answer lies in the difference between explaining and predicting: statisticians work a posteriori, explaining the results and designing a plan; data scientists use historical data to make predictions.


How does data cleaning play a vital role in the analysis? 


Data cleaning can help in analysis because:


  • Cleaning data from multiple sources helps transform it into a format that data analysts or data scientists can work with.
  • Data Cleaning helps increase the accuracy of the model in machine learning.
  • It is a cumbersome process because as the number of data sources increases, the time taken to clean the data increases exponentially due to the number of sources and the volume of data generated by these sources.
  • It might take up to 80% of the time just to clean the data, making it a critical part of the analysis task.

What is linear regression? What do the terms p-value, coefficient, and r-squared value mean? What is the significance of each of these components?


Reference  

Imagine you want to predict the price of a house. That will depend on some factors, called independent variables, such as location, size, and year of construction. If we assume there is a linear relationship between these variables and the price (our dependent variable), then the price is predicted by the following function: Y = a + bX
The p-value in the table is the minimum α (the significance level) at which the coefficient is relevant. The lower the p-value, the more important the variable is in predicting the price. Usually we set a 5% level, so that we have 95% confidence that our variable is relevant.
The p-value is used as an alternative to rejection points to provide the smallest level of significance at which the null hypothesis would be rejected. A smaller p-value means that there is stronger evidence in favor of the alternative hypothesis.
The coefficient value signifies how much the mean of the dependent variable changes given a one-unit shift in the independent variable while holding other variables in the model constant. This property of holding the other variables constant is crucial because it allows you to assess the effect of each variable in isolation from the others.
R squared (R²) is a statistical measure that represents the proportion of the variance for a dependent variable that’s explained by an independent variable or variables in a regression model.
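For reference, the standard formula behind that last definition:

```latex
R^2 = 1 - \frac{SS_{\mathrm{res}}}{SS_{\mathrm{tot}}}
    = 1 - \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2}
```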

Credit: Steve Nouri



What is sampling? How many sampling methods do you know? 

Reference

 

Data sampling is a statistical analysis technique used to select, manipulate and analyze a representative subset of data points to identify patterns and trends in the larger data set being examined. It enables data scientists, predictive modelers and other data analysts to work with a small, manageable amount of data about a statistical population to build and run analytical models more quickly, while still producing accurate findings.


Sampling can be particularly useful with data sets that are too large to efficiently analyze in full – for example, in big data analytics applications or surveys. Identifying and analyzing a representative sample is more efficient and cost-effective than surveying the entirety of the data or population.
An important consideration, though, is the size of the required data sample and the possibility of introducing a sampling error. In some cases, a small sample can reveal the most important information about a data set. In others, using a larger sample can increase the likelihood of accurately representing the data as a whole, even though the increased size of the sample may impede ease of manipulation and interpretation.
There are many different methods for drawing samples from data; the ideal one depends on the data set and situation. Sampling can be based on probability, an approach that uses random numbers that correspond to points in the data set to ensure that there is no correlation between points chosen for the sample. Further variations in probability sampling include:

• Simple random sampling: software is used to randomly select subjects from the whole population.
• Stratified sampling: subsets of the data set or population are created based on a common factor, and samples are randomly collected from each subgroup. A sample is drawn from each stratum (using a random sampling method like simple random sampling or systematic sampling). For example, say you need a sample size of 6 from three groups (yellow, red, and blue): two members from each group are selected randomly. Make sure to sample proportionally: in this simple example, 1/3 of each group (2/6 yellow, 2/6 red and 2/6 blue) has been sampled. If one group is a different size, adjust your proportions; for example, with 9 yellow, 3 red and 3 blue, a 5-item sample would consist of 3/9 yellow (i.e. one third), 1/3 red and 1/3 blue.
• Cluster sampling: the larger data set is divided into subsets (clusters) based on a defined factor, then a random sampling of clusters is analyzed. The sampling unit is the whole cluster; instead of sampling individuals from within each group, a researcher studies whole clusters. For example, if the clusters are natural groupings by head color (yellow, red, blue) and a sample size of 6 is needed, two of the complete clusters are selected randomly.


• Multistage sampling: a more complicated form of cluster sampling, this method also involves dividing the larger population into a number of clusters. Second-stage clusters are then broken out based on a secondary factor, and those clusters are then sampled and analyzed. This staging could continue as multiple subsets are identified, clustered and analyzed.
• Systematic sampling: a sample is created by setting an interval at which to extract data from the larger population – for example, selecting every 10th row in a spreadsheet of 200 items to create a sample size of 20 rows to analyze.


Sampling can also be based on non-probability, an approach in which a data sample is determined and extracted based on the judgment of the analyst. As inclusion is determined by the analyst, it can be more difficult to extrapolate whether the sample accurately represents the larger population than when probability sampling is used.

Non-probability data sampling methods include:
• Convenience sampling: Data is collected from an easily accessible and available group.
• Consecutive sampling: Data is collected from every subject that meets the criteria until the predetermined sample size is met.
• Purposive or judgmental sampling: The researcher selects the data to sample based on predefined criteria.
• Quota sampling: The researcher ensures equal representation within the sample for all subgroups in the data set or population (random sampling is not used).


Once generated, a sample can be used for predictive analytics. For example, a retail business might use data sampling to uncover patterns about customer behavior and predictive modeling to create more effective sales strategies.

Credit: Steve Nouri

What are the assumptions required for linear regression?

There are four major assumptions:


• There is a linear relationship between the dependent variables and the regressors, meaning the model you are creating actually fits the data.
• The errors or residuals of the data are normally distributed and independent from each other.
• There is minimal multicollinearity between explanatory variables.
• Homoscedasticity: the variance around the regression line is the same for all values of the predictor variable.

What is a statistical interaction?

Reference: Statistical Interaction

Basically, an interaction is when the effect of one factor (input variable) on the dependent variable (output variable) differs among levels of another factor. When two or more independent variables are involved in a research design, there is more to consider than simply the “main effect” of each of the independent variables (also termed “factors”). That is, the effect of one independent variable on the dependent variable of interest may not be the same at all levels of the other independent variable; put differently, the effect of one independent variable may depend on the level of the other. In order to find an interaction, you must have a factorial design, in which the two (or more) independent variables are “crossed” with one another so that there are observations at every combination of levels of the two independent variables. For example, stress level and practice when memorizing words may interact: together they may produce lower performance.

What is selection bias? 

Reference

Selection (or ‘sampling’) bias occurs when the sample data that is gathered and prepared for modeling has characteristics that are not representative of the true, future population of cases the model will see.
That is, active selection bias occurs when a subset of the data is systematically (i.e., non-randomly) excluded from analysis.

Selection bias is a kind of error that occurs when the researcher decides what has to be studied. It is associated with research where the selection of participants is not random. Therefore, some conclusions of the study may not be accurate.

The types of selection bias include:
• Sampling bias: a systematic error due to a non-random sample of a population, causing some members of the population to be less likely to be included than others, resulting in a biased sample.
• Time interval: a trial may be terminated early at an extreme value (often for ethical reasons), but the extreme value is likely to be reached by the variable with the largest variance, even if all variables have a similar mean.
• Data: when specific subsets of data are chosen to support a conclusion or rejection of bad data on arbitrary grounds, instead of according to previously stated or generally agreed criteria.
• Attrition: attrition bias is a kind of selection bias caused by attrition (loss of participants) discounting trial subjects/tests that did not run to completion.

What is an example of a data set with a non-Gaussian distribution?

Reference

The Gaussian distribution is part of the Exponential family of distributions, but there are a lot more of them, with the same sort of ease of use, in many cases, and if the person doing the machine learning has a solid grounding in statistics, they can be utilized where appropriate.

Binomial: multiple tosses of a coin, Bin(n,p). The binomial distribution consists of the probabilities of each of the possible numbers of successes on n trials for independent events that each have probability p of occurring.


Bernoulli: Bin(1,p) = Be(p)
Poisson: Pois(λ)
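Their probability mass functions, for reference:

```latex
\text{Binomial: } P(X = k) = \binom{n}{k} p^k (1-p)^{n-k}, \quad k = 0, \dots, n
\qquad
\text{Poisson: } P(X = k) = e^{-\lambda} \frac{\lambda^k}{k!}
```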

What is bias-variance trade-off?

Bias: Bias is an error introduced in the model due to the oversimplification of the algorithm used (does not fit the data properly). It can lead to under-fitting.
Low bias machine learning algorithms — Decision Trees, k-NN and SVM
High bias machine learning algorithms — Linear Regression, Logistic Regression

Variance: Variance is an error introduced in the model due to a too-complex algorithm; it performs very well on the training set but poorly on the test set. It can lead to high sensitivity and overfitting.
Possible high variance – polynomial regression

Normally, as you increase the complexity of your model, you will see a reduction in error due to lower bias in the model. However, this only happens until a particular point. As you continue to make your model more complex, you end up over-fitting your model and hence your model will start suffering from high variance.
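The standard decomposition of expected squared error makes this explicit, with σ² the irreducible noise:

```latex
\mathbb{E}\big[(y - \hat{f}(x))^2\big]
  = \mathrm{Bias}\big[\hat{f}(x)\big]^2 + \mathrm{Var}\big[\hat{f}(x)\big] + \sigma^2
```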


Bias-Variance trade-off: The goal of any supervised machine learning algorithm is to have low bias and low variance to achieve good prediction performance.

1. The k-nearest neighbor algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k which increases the number of neighbors that contribute to the prediction and in turn increases the bias of the model.
2. The support vector machine algorithm has low bias and high variance, but the trade-off can be changed by increasing the C parameter that influences the number of violations of the margin allowed in the training data which increases the bias but decreases the variance.
3. The decision tree has low bias and high variance, you can decrease the depth of the tree or use fewer attributes.
4. The linear regression has low variance and high bias, you can increase the number of features or use another regression that better fits the data.

There is no escaping the relationship between bias and variance in machine learning. Increasing the bias will decrease the variance. Increasing the variance will decrease bias.
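
To see the trade-off concretely, here is a minimal sketch, assuming scikit-learn is available (the dataset is synthetic), that varies k in k-NN: small k overfits (high variance), very large k underfits (high bias).

```python
# Sketch: watch train/test accuracy as k grows in k-NN (assumes scikit-learn).
# Small k -> low bias, high variance; large k -> higher bias, lower variance.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for k in (1, 5, 25, 101):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(f"k={k:3d}  train={model.score(X_train, y_train):.2f}  "
          f"test={model.score(X_test, y_test):.2f}")
```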

 

What is a confusion matrix?

The confusion matrix is a 2×2 table that contains the four outcomes produced by a binary classifier.


A data set used for performance evaluation is called a test data set. It should contain the correct labels and the predicted labels. The predicted labels will be exactly the same as the correct labels if the performance of the binary classifier is perfect; in real-world scenarios, the predicted labels usually match only part of the observed labels.
A binary classifier predicts all data instances of a test data set as either positive or negative. This produces four outcomes: TP (true positive), FP (false positive), TN (true negative), and FN (false negative). Basic measures are derived from the confusion matrix:
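
As a minimal sketch of those basic measures, assuming scikit-learn and a pair of illustrative label vectors:

```python
# Sketch: the four outcomes of a binary classifier (labels are illustrative).
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

# For binary labels 0/1, ravel() returns tn, fp, fn, tp in that order.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TP={tp} FP={fp} TN={tn} FN={fn}")
print("Sensitivity (recall) =", tp / (tp + fn))
print("Specificity          =", tn / (tn + fp))
```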

What is the difference between “long” and “wide” format data?

In the wide format, a subject's repeated responses are in a single row, and each response is in a separate column. In the long format, each row is one time point per subject. You can recognize data in wide format by the fact that columns generally represent groups (variables).
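
A small pandas sketch (the data is illustrative) showing the two formats and the conversion between them:

```python
# Sketch: wide-format repeated measures converted to long format and back.
import pandas as pd

wide = pd.DataFrame({"subject": ["s1", "s2"],
                     "t1": [5.0, 6.1],
                     "t2": [5.4, 6.0]})

# melt: wide -> long (one row per subject per time point)
long = wide.melt(id_vars="subject", var_name="time", value_name="response")
# pivot: long -> wide (one row per subject)
back = long.pivot(index="subject", columns="time", values="response").reset_index()
print(long)
print(back)
```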


What do you understand by the term Normal Distribution?

Data is usually distributed in different ways with a bias to the left or to the right or it can all be jumbled up. However, there are chances that data is distributed around a central value without any bias to the left or right and reaches normal distribution in the form of a bell-shaped curve.


The random variables are distributed in the form of a symmetrical, bell-shaped curve. Properties of Normal Distribution are as follows:

1. Unimodal (Only one mode)
2. Symmetrical (left and right halves are mirror images)
3. Bell-shaped (maximum height (mode) at the mean)
4. Mean, Mode, and Median are all located in the center
5. Asymptotic

What is correlation and covariance in statistics?

Correlation is considered the best technique for measuring and estimating the quantitative relationship between two variables. Correlation measures how strongly two variables are related. Given two random variables, it is the covariance between them divided by the product of their standard deviations, so it always lies between -1 and 1.


Covariance is a measure that indicates the extent to which two random variables change together. It explains the systematic relationship between a pair of random variables, wherein a change in one variable is accompanied by a corresponding change in the other.
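
A short NumPy sketch (illustrative values) showing that correlation is just covariance rescaled by the two standard deviations:

```python
# Sketch: corr(x, y) = cov(x, y) / (std(x) * std(y))
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.0, 9.8])

cov_xy = np.cov(x, y)[0, 1]                       # sample covariance (ddof=1)
corr_xy = cov_xy / (np.std(x, ddof=1) * np.std(y, ddof=1))
print(cov_xy, corr_xy, np.corrcoef(x, y)[0, 1])   # last two values should match
```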


What is the difference between Point Estimates and Confidence Interval? 

Point Estimation gives us a particular value as an estimate of a population parameter. Method of Moments and Maximum Likelihood estimator methods are used to derive Point Estimators for population parameters.

A confidence interval gives us a range of values which is likely to contain the population parameter. The confidence interval is generally preferred, as it tells us how likely this interval is to contain the population parameter. This likeliness, or probability, is called the confidence level or confidence coefficient and is represented by 1 − α, where α is the level of significance.

What is the goal of A/B Testing?

It is hypothesis testing for a randomized experiment with two variants, A and B.
The goal of A/B Testing is to identify any changes to the web page to maximize or increase the outcome of interest. A/B testing is a fantastic method for figuring out the best online promotional and marketing strategies for your business. It can be used to test everything from website copy to sales emails to search ads. An example of this could be identifying the click-through rate for a banner ad.

What is p-value?

When you perform a hypothesis test in statistics, a p-value can help you determine the strength of your results. The p-value is the minimum significance level at which you can reject the null hypothesis. The lower the p-value, the stronger the evidence against the null hypothesis.
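
As a minimal sketch, assuming SciPy is available (the sample below is simulated), a one-sample t-test returns a p-value directly:

```python
# Sketch: one-sample t-test against the null hypothesis "mean = 0".
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
sample = rng.normal(loc=0.3, scale=1.0, size=50)  # true mean is 0.3, not 0

t_stat, p_value = stats.ttest_1samp(sample, popmean=0.0)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# If p < alpha (say 0.05), reject the null hypothesis that the mean is 0.
```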

What do you understand by statistical power of sensitivity and how do you calculate it? 

Sensitivity is commonly used to validate the accuracy of a classifier (Logistic, SVM, Random Forest etc.). Sensitivity = TP / (TP + FN), i.e., the proportion of actual positives that are correctly identified.

 

Why is Re-sampling done?

https://machinelearningmastery.com/statistical-sampling-and-resampling/

  • Sampling is an active process of gathering observations with the intent of estimating a population variable.
  • Resampling is a methodology of economically using a data sample to improve the accuracy and quantify the uncertainty of a population parameter. Resampling methods, in fact, make use of a nested resampling method.

Once we have a data sample, it can be used to estimate the population parameter. The problem is that we only have a single estimate of the population parameter, with little idea of the variability or uncertainty in the estimate. One way to address this is by estimating the population parameter multiple times from our data sample. This is called resampling. Statistical resampling methods are procedures that describe how to economically use available data to estimate a population parameter. The result can be both a more accurate estimate of the parameter (such as taking the mean of the estimates) and a quantification of the uncertainty of the estimate (such as adding a confidence interval).

Resampling methods are very easy to use, requiring little mathematical knowledge. A downside of the methods is that they can be computationally very expensive, requiring tens, hundreds, or even thousands of resamples in order to develop a robust estimate of the population parameter.

The key idea is to resample from the original data, either directly or via a fitted model, to create replicate datasets, from which the variability of the quantities of interest can be assessed without long-winded and error-prone analytical calculation. Because this approach involves repeating the original data analysis procedure with many replicate sets of data, these are sometimes called computer-intensive methods. Each new subsample from the original data sample is used to estimate the population parameter. The sample of estimated population parameters can then be analyzed with statistical tools in order to quantify the expected value and variance, providing measures of the uncertainty of the estimate. Statistical sampling methods can be used in the selection of a subsample from the original sample.

A key difference is that the process must be repeated multiple times. The problem with this is that there will be some relationship between the samples, as observations will be shared across multiple subsamples. This means that the subsamples and the estimated population parameters are not strictly identically and independently distributed. This has implications for statistical tests performed on the sample of estimated population parameters downstream; i.e., paired statistical tests may be required.

Two commonly used resampling methods that you may encounter are k-fold cross-validation and the bootstrap.

  • Bootstrap. Samples are drawn from the dataset with replacement (allowing the same observation to appear more than once in a subsample), where those instances not drawn into the data sample may be used for the test set (a minimal sketch follows this list).
  • k-fold Cross-Validation. A dataset is partitioned into k groups, where each group is given the opportunity of being used as a held out test set leaving the remaining groups as the training set. The k-fold cross-validation method specifically lends itself to use in the evaluation of predictive models that are repeatedly trained on one subset of the data and evaluated on a second held-out subset of the data.  
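
As a minimal sketch of the bootstrap mentioned above, using plain NumPy (the data and the number of resamples are illustrative):

```python
# Sketch: bootstrap a 95% confidence interval for a sample mean.
import numpy as np

rng = np.random.default_rng(42)
data = rng.exponential(scale=2.0, size=200)   # a single observed sample

# Resample with replacement many times, recomputing the statistic each time.
boot_means = [rng.choice(data, size=data.size, replace=True).mean()
              for _ in range(1000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(f"mean = {data.mean():.2f}, 95% bootstrap CI = ({lo:.2f}, {hi:.2f})")
```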

Resampling is done in any of these cases:

  • Estimating the accuracy of sample statistics by using subsets of accessible data or drawing randomly with replacement from a set of data points
  • Substituting labels on data points when performing significance tests
  • Validating models by using random subsets (bootstrapping, cross-validation)

What are the differences between over-fitting and under-fitting?

In statistics and machine learning, one of the most common tasks is to fit a model to a set of training data, so as to be able to make reliable predictions on general untrained data.

In overfitting, a statistical model describes random error or noise instead of the underlying relationship.
Overfitting occurs when a model is excessively complex, such as having too many parameters relative to the number of observations. A model that has been overfitted has poor predictive performance, as it overreacts to minor fluctuations in the training data.

Underfitting occurs when a statistical model or machine learning algorithm cannot capture the underlying trend of the data. Underfitting would occur, for example, when fitting a linear model to non-linear data.
Such a model too would have poor predictive performance.

 

How to combat Overfitting and Underfitting?

To combat overfitting:
1. Add noise
2. Feature selection
3. Increase training set
4. L2 (ridge) or L1 (lasso) regularization; L1 can drive some weights all the way to zero (implicit feature selection), while L2 only shrinks them toward zero
5. Use cross-validation techniques, such as k folds cross-validation
6. Boosting and bagging
7. Dropout technique
8. Perform early stopping
9. Remove inner layers
To combat underfitting:
1. Add features
2. Increase time of training

What is regularization? Why is it useful?

Regularization is the process of adding a tuning parameter (penalty term) to a model to induce smoothness and prevent overfitting. This is most often done by adding a constant multiple of a norm of the weight vector to the loss: the L1 norm (Lasso, the sum of absolute weights) or the squared L2 norm (Ridge, the sum of squared weights). The model predictions should then minimize the loss function calculated on the regularized training set.
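
A minimal sketch with scikit-learn (the synthetic data is illustrative) contrasting the two penalties; note how Lasso drives some coefficients exactly to zero:

```python
# Sketch: L2 (Ridge) and L1 (Lasso) penalties on the same data.
# alpha is the penalty strength.
import numpy as np
from sklearn.linear_model import Ridge, Lasso

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
# Only the first two features actually matter.
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=100)

print("Ridge:", Ridge(alpha=1.0).fit(X, y).coef_.round(2))
print("Lasso:", Lasso(alpha=0.1).fit(X, y).coef_.round(2))
```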

What Is the Law of Large Numbers? 

It is a theorem that describes the result of performing the same experiment a large number of times. This theorem forms the basis of frequency-style thinking. It says that the sample means, the sample variance and the sample standard deviation converge to what they are trying to estimate. According to the law, the average of the results obtained from a large number of trials should be close to the expected value and will tend to become closer to the expected value as more trials are performed.
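
A quick simulation makes this concrete: the running mean of simulated fair-coin flips drifts toward the expected value 0.5 as the number of trials grows (a NumPy sketch; the sample sizes are illustrative).

```python
# Sketch: law of large numbers with fair-coin flips (expected value 0.5).
import numpy as np

rng = np.random.default_rng(1)
flips = rng.integers(0, 2, size=100_000)   # 0 = tails, 1 = heads
for n in (10, 100, 10_000, 100_000):
    print(f"n={n:>7}  sample mean = {flips[:n].mean():.4f}")
```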

What Are Confounding Variables?

In statistics, a confounder is a variable that influences both the dependent variable and independent variable.

If you are researching whether a lack of exercise leads to weight gain:
lack of exercise = independent variable
weight gain = dependent variable
A confounding variable here would be any other variable that affects both of these variables, such as the age of the subject.

What is Survivorship Bias?

It is the logical error of focusing on the aspects that survived some process while casually overlooking those that did not, because of their lack of prominence. This can lead to wrong conclusions in numerous ways. For example, during a recession you look only at the businesses that survived and note that they are performing poorly; however, they performed better than the rest, which failed and were thus removed from the time series.

Explain how a ROC curve works?

The ROC curve is a graphical representation of the contrast between true positive rates and false positive rates at various thresholds. It is often used as a proxy for the trade-off between sensitivity (the true positive rate) and the false positive rate.
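
A minimal sketch, assuming scikit-learn and a synthetic dataset, that computes the points of a ROC curve and the area under it:

```python
# Sketch: ROC points (FPR, TPR per threshold) and AUC for a simple classifier.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]
fpr, tpr, thresholds = roc_curve(y_te, scores)
print("AUC =", roc_auc_score(y_te, scores))
```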


What is TF/IDF vectorization?

TF-IDF, short for term frequency-inverse document frequency, is a numerical statistic that is intended to reflect how important a word is to a document in a collection or corpus. It is often used as a weighting factor in information retrieval and text mining.


The TF-IDF value increases proportionally to the number of times a word appears in the document but is offset by the frequency of the word in the corpus, which helps to adjust for the fact that some words appear more frequently in general.
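
A minimal sketch with scikit-learn (a toy three-document corpus) showing the weighting in action; words common across documents receive lower weights than words unique to one document:

```python
# Sketch: TF-IDF weights for a tiny illustrative corpus.
from sklearn.feature_extraction.text import TfidfVectorizer

docs = ["the cat sat on the mat",
        "the dog chased the cat",
        "quantum entanglement is strange"]

vec = TfidfVectorizer()
tfidf = vec.fit_transform(docs)
# Weights for the first document; "the" is down-weighted relative to rarer words.
print(dict(zip(vec.get_feature_names_out(), tfidf.toarray()[0].round(2))))
```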

Python or R – Which one would you prefer for text analytics?

We would prefer Python for the following reasons:
• Python has the Pandas library, which provides easy-to-use data structures and high-performance data analysis tools.
• R is better suited to statistical analysis and modeling than to general-purpose text processing.
• Python is generally faster for most text analytics workloads.

How does data cleaning play a vital role in the analysis? 

Data cleaning can help in analysis because:

  • Cleaning data from multiple sources helps transform it into a format that data analysts or data scientists can work with.
  • Data Cleaning helps increase the accuracy of the model in machine learning.
  • It is a cumbersome process because as the number of data sources increases, the time taken to clean the data increases exponentially due to the number of sources and the volume of data generated by these sources.
  • It might take up to 80% of the total time just to clean the data, making it a critical part of the analysis task.

Differentiate between univariate, bivariate and multivariate analysis. 

Univariate analyses are descriptive statistical analysis techniques which can be differentiated based on one variable involved at a given point of time. For example, a pie chart of sales by territory involves only one variable, so the analysis can be referred to as univariate analysis.

The bivariate analysis attempts to understand the relationship between two variables at a time, as in a scatterplot. For example, analyzing the volume of sales and spending together can be considered an example of bivariate analysis.

Multivariate analysis deals with the study of more than two variables to understand the effect of variables on the responses.

Explain Star Schema

It is a traditional database schema with a central table. Satellite tables map IDs to physical names or descriptions and can be connected to the central fact table using the ID fields; these tables are known as lookup tables and are principally useful in real-time applications, as they save a lot of memory. Sometimes star schemas involve several layers of summarization to recover information faster.

What is Cluster Sampling?

Cluster sampling is a technique used when it becomes difficult to study the target population spread across a wide area and simple random sampling cannot be applied. Cluster Sample is a probability sample where each sampling unit is a collection or cluster of elements.

For example, a researcher wants to survey the academic performance of high school students in Japan. He can divide the entire population of Japan into different clusters (cities). Then the researcher selects a number of clusters depending on his research through simple or systematic random sampling.

What is Systematic Sampling? 

Systematic sampling is a statistical technique where elements are selected from an ordered sampling frame. In systematic sampling, the list is traversed in a circular manner, so once you reach the end of the list, you continue from the top again. The classic example of systematic sampling is the equal-probability method.

What are Eigenvectors and Eigenvalues? 

Eigenvectors are used for understanding linear transformations. In data analysis, we usually calculate the eigenvectors for a correlation or covariance matrix. Eigenvectors are the directions along which a particular linear transformation acts by flipping, compressing or stretching.
Eigenvalue can be referred to as the strength of the transformation in the direction of eigenvector or the factor by which the compression occurs.

Give examples where a false positive is more important than a false negative?

Let us first understand what false positives and false negatives are:

  • False Positives are the cases where you wrongly classified a non-event as an event, a.k.a. Type I error.
  • False Negatives are the cases where you wrongly classify events as non-events, a.k.a. Type II error.

Example 1: In the medical field, assume you have to give chemotherapy to patients. Assume a patient comes to the hospital and tests positive for cancer based on the lab prediction, but he actually doesn't have cancer. This is a case of a false positive. Here it is extremely dangerous to start chemotherapy on this patient when he actually does not have cancer. In the absence of cancerous cells, chemotherapy will damage his normal healthy cells and might lead to severe illness, even cancer.

Example 2: Let's say an e-commerce company decided to give a $1000 gift voucher to the customers whom they expect to purchase at least $10,000 worth of items. They send free voucher mail directly to 100 customers without any minimum purchase condition because they assume they will make at least 20% profit on items sold above $10,000. Now the issue is that if we send the $1000 gift vouchers to customers who have not actually purchased anything but are wrongly marked as likely to make $10,000 worth of purchases, the company loses money on every such voucher: that is the cost of a false positive.

Give examples where a false negative is more important than a false positive? And vice versa?

Example 1 FN: What if a jury or judge decides to let a criminal go free?

Example 2 FN: Fraud detection.

Example 3 FP: evaluating a voucher promotion: if the data suggests many customers used the voucher when in fact they did not, the promotion is judged incorrectly.

Give Examples where both false positive and false negatives are equally important? 

In the Banking industry giving loans is the primary source of making money but at the same time if your repayment rate is not good you will not make any profit, rather you will risk huge losses.
Banks don’t want to lose good customers and at the same point in time, they don’t want to acquire bad customers. In this scenario, both the false positives and false negatives become very important to measure.

What is the Difference between a Validation Set and a Test Set?

A Training Set:
• to fit the parameters i.e. weights

A Validation set:
• part of the training set
• for parameter selection
• to avoid overfitting

A Test set:
• for testing or evaluating the performance of a trained machine learning model, i.e. evaluating the
predictive power and generalization.

What is cross-validation?

Reference: k-fold cross validation 

Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter called k that refers to the number of groups that a given data sample is to be split into; as such, the procedure is often called k-fold cross-validation. When a specific value for k is chosen, it may be used in place of k in the reference to the model, such as k=10 becoming 10-fold cross-validation. It is mainly used in settings where the objective is prediction, and one wants to estimate how accurately a model will perform in practice.

Cross-validation is primarily used in applied machine learning to estimate the skill of a machine learning model on unseen data. That is, to use a limited sample in order to estimate how the model is expected to perform in general when used to make predictions on data not used during the training of the model.

It is a popular method because it is simple to understand and because it generally results in a less biased or less optimistic estimate of the model skill than other methods, such as a simple train/test split.

The general procedure is as follows:
1. Shuffle the dataset randomly.
2. Split the dataset into k groups
3. For each unique group:
a. Take the group as a hold out or test data set
b. Take the remaining groups as a training data set
c. Fit a model on the training set and evaluate it on the test set
d. Retain the evaluation score and discard the model
4. Summarize the skill of the model using the sample of model evaluation scores


There is an alternative in Scikit-Learn called stratified k-fold, in which the split preserves a representative sample of each class in every fold; plain k-fold offers no such assurance, which can be a problem with a very unbalanced dataset.
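
A minimal sketch, assuming scikit-learn and a synthetic dataset, running both plain and stratified 10-fold cross-validation:

```python
# Sketch: 10-fold and stratified 10-fold cross-validation of one model.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=300, random_state=0)
model = LogisticRegression(max_iter=1000)

for cv in (KFold(n_splits=10, shuffle=True, random_state=0),
           StratifiedKFold(n_splits=10, shuffle=True, random_state=0)):
    scores = cross_val_score(model, X, y, cv=cv)
    print(type(cv).__name__, scores.mean().round(3))
```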

What is Machine Learning?

Machine learning is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine Learning explores the study and construction of algorithms that can learn from and make predictions on data. You select a model to train and then manually perform feature extraction. Used to devise complex models and algorithms that lend themselves to a prediction which in commercial use is known as predictive analytics.

What is Supervised Learning? 

Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consist of a set of training examples.

Algorithms: Support Vector Machines, Regression, Naive Bayes, Decision Trees, K-nearest Neighbor Algorithm and Neural Networks

Example: If you built a fruit classifier, the labels will be “this is an orange, this is an apple and this is a banana”, based on showing the classifier examples of apples, oranges and bananas.

What is Unsupervised learning?

Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labelled responses.

Algorithms: Clustering, Anomaly Detection, Neural Networks and Latent Variable Models

Example: In the same example, a fruit clustering will categorize as “fruits with soft skin and lots of dimples”, “fruits with shiny hard skin” and “elongated yellow fruits”.

What are the various Machine Learning algorithms?

[Figure: overview of the main machine learning algorithms]

What is “Naive” in a Naive Bayes?

Reference: Naive Bayes Classifier on Wikipedia

Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes’ theorem with the “naive” assumption of conditional independence between every pair of features given the value of the class variable. Bayes’ theorem states the following relationship, given class variable y and dependent feature vector x1 through xn:

P(y | x1, ..., xn) = P(y) * P(x1, ..., xn | y) / P(x1, ..., xn)

Under the naive independence assumption, this simplifies so that P(y | x1, ..., xn) is proportional to P(y) * P(x1 | y) * ... * P(xn | y).

What is PCA (Principal Component Analysis)? When do you use it?

Reference: PCA on wikipedia

Principal component analysis (PCA) is a statistical method used in Machine Learning. It consists of projecting data from a higher-dimensional space into a lower-dimensional space while maximizing the variance captured by each retained dimension.

The process works as follows. We define a matrix A with n rows (the single observations of a dataset; in a tabular format, each single row) and p columns, our features. For this matrix we construct a variable space with as many dimensions as there are features; each feature represents one coordinate axis. For each feature, the length has been standardized according to a scaling criterion, normally by scaling to unit variance. It is essential to scale the features to a common scale, otherwise features with a greater magnitude will weigh more in determining the principal components. Once all the observations are plotted and the mean of each variable is computed, that mean is represented by a point in the center of the plot (the center of gravity). Then, we subtract the mean from each observation, shifting the coordinate system so that its center is at the origin. The best-fitting line that results is the line that best accounts for the shape of the point swarm; it represents the direction of maximum variance in the data. Each observation may be projected onto this line in order to get a coordinate value along the PC line. This value is known as a score. The next best-fitting line can be similarly chosen from directions perpendicular to the first.
Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components.


PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations.
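
A minimal sketch with scikit-learn on the classic Iris data, scaling first (as stressed above) and then projecting onto two components:

```python
# Sketch: standardize, then project onto the top 2 principal components.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = StandardScaler().fit_transform(load_iris().data)   # scaling matters
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_.round(2))
```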

SVM (Support Vector Machine)  algorithm

Reference: SVM on wikipedia

Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a new data point will be in. In the case of support-vector machines, a data point is viewed as a p-dimensional vector (a list of p numbers), and we want to know whether we can separate such points with a (p − 1)-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So, we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane, and the linear classifier it defines is known as a maximum-margin classifier, or equivalently, the perceptron of optimal stability. (In the classic Wikipedia illustration, the hyperplane that best divides the data is the one labeled H3.)

  • SVMs are helpful in text and hypertext categorization, as their application can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings.
  • Some methods for shallow semantic parsing are based on support vector machines.
  • Classification of images can also be performed using SVMs. Experimental results show that SVMs achieve significantly higher search accuracy than traditional query refinement schemes after just three to four rounds of relevance feedback.
  • Classification of satellite data like SAR data using supervised SVM.
  • Hand-written characters can be recognized using SVM.

What are the support vectors in SVM? 

[Figure: SVM hyperplane with margin lines and support vectors highlighted]

In the diagram, we see that the sketched lines mark the distance from the classifier (the hyper plane) to the closest data points called the support vectors (darkened data points). The distance between the two thin lines is called the margin.

To extend SVM to cases in which the data are not linearly separable, we introduce the hinge loss function, max (0, 1 – yi(w∙ xi − b)). This function is zero if x lies on the correct side of the margin. For data on the wrong side of the margin, the function’s value is proportional to the distance from the margin. 
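
A minimal sketch with scikit-learn (synthetic blobs) that fits a linear SVM and inspects the support vectors it found:

```python
# Sketch: fit a linear SVM and look at its support vectors.
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

X, y = make_blobs(n_samples=60, centers=2, random_state=6)
clf = SVC(kernel="linear", C=1.0).fit(X, y)

# The points closest to the separating hyperplane define the margin.
print("support vectors per class:", clf.n_support_)
print("first support vector:", clf.support_vectors_[0])
```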

What are the different kernels in SVM?

There are four types of kernels in SVM.
1. LinearKernel
2. Polynomial kernel
3. Radial basis kernel
4. Sigmoid kernel

What are the most known ensemble algorithms? 

Reference: Ensemble Algorithms

The most popular ensemble algorithms are AdaBoost, Random Forest, and eXtreme Gradient Boosting (XGBoost).

AdaBoost is best used in a dataset with low noise, when computational complexity or timeliness of results is not a main concern and when there are not enough resources for broader hyperparameter tuning due to lack of time and knowledge of the user.

Random forests should not be used when dealing with time series data or any other data where look-ahead bias should be avoided, and the order and continuity of the samples need to be ensured. This algorithm can handle noise relatively well, but more knowledge from the user is required to adequately tune the algorithm compared to AdaBoost.

The main advantages of XGBoost are its lightning speed compared to other algorithms, such as AdaBoost, and its regularization parameter, which successfully reduces variance. Even aside from the regularization parameter, this algorithm leverages a learning rate (shrinkage) and subsamples from the features like random forests, which increases its ability to generalize even further. However, XGBoost is more difficult to understand, visualize and tune compared to AdaBoost and random forests. There is a multitude of hyperparameters that can be tuned to increase performance.

What is Deep Learning?

Deep Learning is nothing but a paradigm of machine learning which has shown incredible promise in recent years. This is because of the fact that Deep Learning shows a great analogy with the functioning of the neurons in the human brain.


What is the difference between machine learning and deep learning?

Reference: Deep learning & Machine learning: what’s the difference?

Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Machine learning can be categorized in the following four categories.
1. Supervised machine learning,
2. Semi-supervised machine learning,
3. Unsupervised machine learning,
4. Reinforcement learning.

Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.


• The main difference between deep learning and machine learning is the way data is presented to the system. Machine learning algorithms almost always require structured data, while deep learning networks rely on layers of ANNs (artificial neural networks).

• Machine learning algorithms are designed to “learn” to act by understanding labeled data and then use it to produce new results with more datasets. However, when the result is incorrect, there is a need to “teach them”. Because machine learning algorithms require labeled data, they are not suitable for solving complex queries that involve a huge amount of data.

• Deep learning networks do not require human intervention, as the multilevel layers in neural networks place data in a hierarchy of different concepts, which ultimately learn from their own mistakes. However, even they can be wrong if the data quality is not good enough.

• Data decides everything. It is the quality of the data that ultimately determines the quality of the result.

• Both of these subsets of AI are somehow connected to data, which makes it possible to represent a certain form of “intelligence.” However, you should be aware that deep learning requires much more data than a traditional machine learning algorithm. The reason for this is that deep learning networks can identify different elements in neural network layers only when more than a million data points interact. Machine learning algorithms, on the other hand, are capable of learning by pre-programmed criteria.

What is the reason for the popularity of Deep Learning in recent times? 

Now although Deep Learning has been around for many years, the major breakthroughs from these techniques came just in recent years. This is because of two main reasons:
• The increase in the amount of data generated through various sources
• The growth in hardware resources required to run these models
GPUs are many times faster and help us build bigger and deeper deep learning models in comparatively less time than was previously required.

What is reinforcement learning?

Reinforcement Learning allows an agent to take actions that maximize cumulative reward. It learns by trial and error through a reward/penalty system: the environment rewards the agent, so over time the agent makes better decisions.
Example: robot = agent, maze = environment. It is used for complex tasks (self-driving cars, game AI).

RL is a series of time steps in a Markov Decision Process:

1. Environment: the space in which the agent operates
2. State: data describing the situation that resulted from the agent's past actions
3. Action: the action taken by the agent
4. Reward: the number the agent receives after its last action
5. Observation: data about the environment, which can be fully visible or partially hidden

What are Artificial Neural Networks?

Artificial Neural Networks are a specific set of algorithms that have revolutionized machine learning. They are inspired by biological neural networks. Neural Networks can adapt to changing input, so the network generates the best possible result without needing to redesign the output criteria.

Artificial Neural Networks works on the same principle as a biological Neural Network. It consists of inputs which get processed with weighted sums and Bias, with the help of Activation Functions.


How Are Weights Initialized in a Network?

There are two methods here: we can either initialize the weights to zero or assign them randomly.

Initializing all weights to 0: This makes your model similar to a linear model. All the neurons in every layer perform the same operation, giving the same output and making the deep net useless.

Initializing all weights randomly: Here, the weights are assigned randomly by initializing them very close to 0. It gives better accuracy to the model since every neuron performs different computations. This is the most commonly used method.

What Is the Cost Function? 

Also referred to as “loss” or “error,” the cost function is a measure of how good your model’s performance is. It’s used to compute the error of the output layer during backpropagation. We push that error backwards through the neural network and use it during the different training functions.
The best-known cost function is the mean squared error.


What Are Hyperparameters?

With neural networks, you’re usually working with hyperparameters once the data is formatted correctly.
A hyperparameter is a parameter whose value is set before the learning process begins. It determines how a network is trained and the structure of the network (such as the number of hidden units, the learning rate, epochs, batches, etc.).

What Will Happen If the Learning Rate is Set inaccurately (Too Low or Too High)? 

When your learning rate is too low, training of the model will progress very slowly as we are making minimal updates to the weights. It will take many updates before reaching the minimum point.
If the learning rate is set too high, this causes undesirable divergent behavior in the loss function due to drastic weight updates. Training may fail to converge (never settle on a good output) or even diverge (the updates blow up and the network cannot train).

What Is The Difference Between Epoch, Batch, and Iteration in Deep Learning? 

Epoch – Represents one iteration over the entire dataset (everything put into the training model).
Batch – Refers to when we cannot pass the entire dataset into the neural network at once, so we divide the dataset into several batches.
Iteration – the number of batches needed to complete one epoch. If we have 10,000 images as data and a batch size of 200, then an epoch runs 50 iterations (10,000 divided by 200).

What Are the Different Layers on CNN?

Reference: Layers of CNN 


The Convolutional neural networks are regularized versions of multilayer perceptron (MLP). They were developed based on the working of the neurons of the animal visual cortex.

The objective of using the CNN:

The idea is that you give the computer this array of numbers and it will output numbers that describe the probability of the image being a certain class (.80 for a cat, .15 for a dog, .05 for a bird, etc.). It works similar to how our brain works. When we look at a picture of a dog, we can classify it as such if the picture has identifiable features such as paws or 4 legs. In a similar way, the computer is able to perform image classification by looking for low-level features such as edges and curves and then building up to more abstract concepts through a series of convolutional layers. The computer uses low-level features obtained at the initial levels to generate high-level features such as paws or eyes to identify the object.

There are four layers in CNN:
1. Convolutional Layer – the layer that performs a convolutional operation, creating several smaller picture windows to go over the data.
2. Activation Layer (ReLU Layer) – it brings non-linearity to the network and converts all the negative pixels to zero. The output is a rectified feature map. It follows each convolutional layer.
3. Pooling Layer – pooling is a down-sampling operation that reduces the dimensionality of the feature map. The stride determines how far the pooling window slides each step, and max pooling takes the maximum of each n x n window.
4. Fully Connected Layer – this layer recognizes and classifies the objects in the image.

What Is Pooling on CNN, and How Does It Work?

Pooling is used to reduce the spatial dimensions of a CNN. It performs down-sampling operations to reduce the dimensionality and creates a pooled feature map by sliding a filter matrix over the input matrix.

What are Recurrent Neural Networks (RNNs)? 

Reference: RNNs

RNNs are a type of artificial neural network designed to recognize patterns in sequences of data, such as time series, stock market data, and government agency data.

Recurrent Neural Networks (RNNs) add an interesting twist to basic neural networks. A vanilla neural network takes in a fixed size vector as input which limits its usage in situations that involve a ‘series’ type input with no predetermined size.


RNNs are designed to take a series of inputs with no predetermined limit on size. One could ask, what’s the big deal, I can call a regular NN repeatedly too?


Sure can, but the ‘series’ part of the input means something. A single input item from the series is related to others and likely has an influence on its neighbors. Otherwise it’s just “many” inputs, not a “series” input (duh!).
Recurrent Neural Network remembers the past and its decisions are influenced by what it has learnt from the past. Note: Basic feed forward networks “remember” things too, but they remember things they learnt during training. For example, an image classifier learns what a “1” looks like during training and then uses that knowledge to classify things in production.
While RNNs learn similarly while training, in addition, they remember things learnt from prior input(s) while generating output(s). RNNs can take one or more input vectors and produce one or more output vectors and the output(s) are influenced not just by weights applied on inputs like a regular NN, but also by a “hidden” state vector representing the context based on prior input(s)/output(s). So, the same input could produce a different output depending on previous inputs in the series.


In summary, in a vanilla neural network, a fixed size input vector is transformed into a fixed size output vector. Such a network becomes “recurrent” when you repeatedly apply the transformations to a series of given input and produce a series of output vectors. There is no pre-set limitation to the size of the vector. And, in addition to generating the output which is a function of the input and hidden state, we update the hidden state itself based on the input and use it in processing the next input.

What is the role of the Activation Function?

The Activation function is used to introduce non-linearity into the neural network helping it to learn more complex function. Without which the neural network would be only able to learn linear function which is a linear combination of its input data. An activation function is a function in an artificial neuron that delivers an output based on inputs.
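
Two common activation functions, written out with NumPy as an illustrative sketch:

```python
# Sketch: two widely used activation functions.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes any input into (0, 1)

def relu(x):
    return np.maximum(0.0, x)         # zero for negatives, identity for positives

z = np.array([-2.0, 0.0, 2.0])
print(sigmoid(z), relu(z))
```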

Machine Learning libraries for various purposes

[Figure: popular machine learning libraries grouped by purpose]

What is an Auto-Encoder?

Reference: Auto-Encoder

Auto-encoders are simple learning networks that aim to transform inputs into outputs with the minimum possible error. This means that we want the output to be as close to input as possible. We add a couple of layers between the input and the output, and the sizes of these layers are smaller than the input layer. The auto-encoder receives unlabeled input which is then encoded to reconstruct the input. 

An autoencoder is a type of artificial neural network used to learn efficient data coding in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input, hence its name. Several variants exist to the basic model, with the aim of forcing the learned representations of the input to assume useful properties.
Autoencoders are effectively used for solving many applied problems, from face recognition to acquiring the semantic meaning of words.


What is a Boltzmann Machine?

Boltzmann machines have a simple learning algorithm that allows them to discover interesting features that represent complex regularities in the training data. The Boltzmann machine is basically used to optimize the weights and the quantity for the given problem. The learning algorithm is very slow in networks with many layers of feature detectors. “Restricted Boltzmann Machines” algorithm has a single layer of feature detectors which makes it faster than the rest.


What Is Dropout and Batch Normalization?

Dropout is a technique of randomly dropping out hidden and visible nodes of a network to prevent overfitting (typically around 20 percent of the nodes are dropped). It roughly doubles the number of iterations needed for the network to converge. It is used to avoid overfitting, as it improves the network’s ability to generalize.

Batch normalization is a technique to improve the performance and stability of neural networks by normalizing the inputs of every layer so that they have a mean output activation of zero and a standard deviation of one.

Why Is TensorFlow the Most Preferred Library in Deep Learning?

TensorFlow provides both C++ and Python APIs, making it easier to work on and has a faster compilation time compared to other Deep Learning libraries like Keras and PyTorch. TensorFlow supports both CPU and GPU computing devices.

What is Tensor in TensorFlow?

A tensor is a mathematical object represented as an array of higher dimensions. Think of an n-dimensional matrix. These arrays of data, with different dimensions and ranks, fed as input to the neural network are called “Tensors.”

What is the Computational Graph?

Everything in a TensorFlow is based on creating a computational graph. It has a network of nodes where each node operates. Nodes represent mathematical operations, and edges represent tensors. Since data flows in the form of a graph, it is also called a “DataFlow Graph.”

What is logistic regression?

• Logistic Regression models a function of the target variable as a linear combination of the predictors, then converts this function into a fitted value in the desired range.

• Binary or Binomial Logistic Regression can be understood as the type of Logistic Regression that deals with scenarios wherein the observed outcomes for dependent variables can be only in binary, i.e., it can have only two possible types.

• Multinomial Logistic Regression works in scenarios where the outcome can have more than two possible types – type A vs type B vs type C – that are not in any particular order.


How is logistic regression done? 

Logistic regression measures the relationship between the dependent variable (our label of what we want to predict) and one or more independent variables (our features) by estimating probability using its underlying logistic function (sigmoid).
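
A minimal sketch with scikit-learn on one of its built-in datasets; the pipeline and the choice of dataset here are illustrative:

```python
# Sketch: fit a logistic model and read out a predicted probability.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Scaling first helps the solver converge; the sigmoid maps scores to (0, 1).
clf = make_pipeline(StandardScaler(), LogisticRegression()).fit(X_tr, y_tr)
print("P(class 1) for the first test sample:",
      clf.predict_proba(X_te[:1])[0, 1].round(3))
```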

Explain the steps in making a decision tree. 

1. Take the entire data set as input
2. Calculate entropy of the target variable, as well as the predictor attributes
3. Calculate your information gain of all attributes (we gain information on sorting different objects from each other)
4. Choose the attribute with the highest information gain as the root node
5. Repeat the same procedure on every branch until the decision node of each branch is finalized
For example, let’s say you want to build a decision tree to decide whether you should accept or decline a job offer. The decision tree for this case is as shown:

[Figure: decision tree for accepting or declining a job offer]

It is clear from the decision tree that an offer is accepted if:
• Salary is greater than $50,000
• The commute is less than an hour
• Coffee is offered

How do you build a random forest model?

A random forest is built up of a number of decision trees. If you split the data into different packages and make a decision tree in each of the different groups of data, the random forest brings all those trees together.

Steps to build a random forest model:

1. Randomly select k features from the total of m features, where k << m
2. Among the k features, calculate the node d using the best split point
3. Split the node into daughter nodes using the best split
4. Repeat steps two and three until leaf nodes are finalized
5. Build the forest by repeating steps one to four n times to create n trees

Differentiate between univariate, bivariate, and multivariate analysis. 

Univariate data contains only one variable. The purpose of the univariate analysis is to describe the data and find patterns that exist within it.


The patterns can be studied by drawing conclusions using mean, median, mode, dispersion or range, minimum, maximum, etc.

Bivariate data involves two different variables. The analysis of this type of data deals with causes and relationships and the analysis is done to determine the relationship between the two variables.

[Table: temperature vs. sales data for the bivariate example]

Here, the relationship is visible from the table: temperature and sales are directly proportional to each other. The hotter the temperature, the better the sales.

Multivariate data involves three or more variables and is categorized under multivariate analysis. It is similar to bivariate data but contains more than one dependent variable.

Example: data for house price prediction
The patterns can be studied by drawing conclusions using mean, median, and mode, dispersion or range, minimum, maximum, etc. You can start describing the data and using it to guess what the price of the house will be.

What are the feature selection methods used to select the right variables?

There are two main methods for feature selection.
Filter Methods
This involves:
• Linear discrimination analysis
• ANOVA
• Chi-Square
The best analogy for selecting features is “bad data in, bad answer out.” When we’re limiting or selecting the features, it’s all about cleaning up the data coming in.

Wrapper Methods
This involves:
• Forward Selection: We test one feature at a time and keep adding them until we get a good fit
• Backward Selection: We test all the features and start removing them to see what works better
• Recursive Feature Elimination: Recursively looks through all the different features and how they pair together

Wrapper methods are very labor-intensive, and high-end computers are needed if a lot of data analysis is performed with the wrapper method.

You are given a data set consisting of variables with more than 30 percent missing values. How will you deal with them? 

If the data set is large, we can simply remove the rows with missing data values. It is the quickest way, and we then use the rest of the data to predict the values.

For smaller data sets, we can impute missing values with the mean or median of the rest of the data using a pandas DataFrame in Python. There are different ways to do so, for example df.fillna(df.mean()).

Other option of imputation is using KNN for numeric or classification values (as KNN just uses k closest values to impute the missing value).
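
A small pandas sketch (the column is illustrative) of mean and median imputation:

```python
# Sketch: fill missing values with the column mean or median.
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 31, np.nan, 40]})
df["age_mean"] = df["age"].fillna(df["age"].mean())
df["age_median"] = df["age"].fillna(df["age"].median())
print(df)
```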

How will you calculate the Euclidean distance in Python?

Given two points, the Euclidean distance can be calculated as follows (the snippet needs math.sqrt, which is often omitted when this is shown):

```python
import math

plot1 = [1, 3]
plot2 = [2, 5]

# Square the coordinate differences, sum them, and take the square root.
euclidean_distance = math.sqrt((plot1[0] - plot2[0])**2 + (plot1[1] - plot2[1])**2)
print(euclidean_distance)  # sqrt(1 + 4) = 2.236...
```

What are dimensionality reduction and its benefits? 

Dimensionality reduction refers to the process of converting a data set with vast dimensions into data with fewer dimensions (fields) to convey similar information concisely.

This reduction helps in compressing data and reducing storage space. It also reduces computation time as fewer dimensions lead to less computing. It removes redundant features; for example, there’s no point in storing a value in two different units (meters and inches).

How should you maintain a deployed model?

The steps to maintain a deployed model are (CREM):

1. Monitor: constant monitoring of all models is needed to determine their performance accuracy. When you change something, you want to figure out how your changes are going to affect things. This needs to be monitored to ensure the model is doing what it’s supposed to do.
2. Evaluate: evaluation metrics of the current model are calculated to determine if a new algorithm is needed.
3. Compare: the new models are compared to each other to determine which model performs the best.
4. Rebuild: the best performing model is re-built on the current state of data.

How can time-series data be declared stationary?

1. The mean of the series should not be a function of time.
2. The variance of the series should not be a function of time. This property is known as homoscedasticity.
3. The covariance of the i-th term and the (i+m)-th term should not be a function of time.
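
For a statistical check of stationarity, one common option is the Augmented Dickey-Fuller test; this sketch assumes the statsmodels library is installed and uses simulated series:

```python
# Sketch: ADF test on a stationary series vs. a non-stationary random walk.
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
stationary = rng.normal(size=500)      # white noise: stationary
random_walk = np.cumsum(stationary)    # random walk: non-stationary

for name, series in [("white noise", stationary), ("random walk", random_walk)]:
    p = adfuller(series)[1]
    print(f"{name}: ADF p-value = {p:.4f}")  # small p suggests stationarity
```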

‘People who bought this also bought…’ recommendations seen on Amazon are a result of which algorithm?

The recommendation engine is accomplished with collaborative filtering. Collaborative filtering uses the behavior of other users and their purchase history in terms of ratings, selection, etc.
The engine makes predictions on what might interest a person based on the preferences of other users. In this algorithm, item features are unknown.
For example, a sales page shows that a certain number of people buy a new phone and also buy tempered glass at the same time. Next time, when a person buys a phone, he or she may see a recommendation to buy tempered glass as well.

What is a Generative Adversarial Network?

Suppose there is a wine shop purchasing wine from dealers, which they resell later. But some dealers sell fake wine. In this case, the shop owner should be able to distinguish between fake and authentic wine. The forger will try different techniques to sell fake wine and make sure specific techniques go past the shop owner’s check. The shop owner would probably get some feedback from wine experts that some of the wine is not original. The owner would have to improve how he determines whether a wine is fake or authentic.
The forger’s goal is to create wines that are indistinguishable from the authentic ones while the shop owner intends to tell if the wine is real or not accurately.

[Figure: GAN illustrated as the wine forger (generator) versus the shop owner (discriminator)]

• There is a noise vector coming into the forger who is generating fake wine.
• Here the forger acts as a Generator.
• The shop owner acts as a Discriminator.
• The Discriminator gets two inputs; one is the fake wine, while the other is the real authentic wine.
The shop owner has to figure out whether it is real or fake.

So, there are two primary components of Generative Adversarial Network (GAN) named:
1. Generator
2. Discriminator

The generator is a CNN that keeps producing images that are progressively closer in appearance to the real images, while the discriminator tries to determine the difference between real and fake images. The ultimate aim is to make the discriminator learn to identify real and fake images.

You are given a dataset on cancer detection. You have built a classification model and achieved an accuracy of 96 percent. Why shouldn’t you be happy with your model performance? What can you do about it?

Cancer detection results in imbalanced data. In an imbalanced dataset, accuracy should not be used as a measure of performance, since 96 percent accuracy may simply reflect predicting the majority class. It is important to focus on the remaining four percent, which represents the patients who were wrongly diagnosed. Early diagnosis is crucial when it comes to cancer detection and can greatly improve a patient’s prognosis.

Hence, to evaluate model performance, we should use Sensitivity (True Positive Rate), Specificity (True Negative Rate), F measure to determine the class wise performance of the classifier.

We want to predict the probability of death from heart disease based on three risk factors: age, gender, and blood cholesterol level. What is the most appropriate algorithm for this case?

The most appropriate algorithm for this case is logistic regression.

After studying the behavior of a population, you have identified four specific individual types that are valuable to your study. You would like to find all users who are most similar to each individual type. Which algorithm is most appropriate for this study? 

As we are looking for grouping people together specifically by four different similarities, it indicates the value of k. Therefore, K-means clustering is the most appropriate algorithm for this study.

You have run the association rules algorithm on your dataset, and the two rules {banana, apple} => {grape} and {apple, orange} => {grape} have been found to be relevant. What else must be true? 

{grape, apple} must be a frequent itemset.

Your organization has a website where visitors randomly receive one of two coupons. It is also possible that visitors to the website will not receive a coupon. You have been asked to determine if offering a coupon to website visitors has any impact on their purchase decisions. Which analysis method should you use?

One-way ANOVA: in statistics, one-way analysis of variance is a technique that can be used to compare the means of two or more samples. This technique can be used only for numerical response data, the “Y”, usually one variable, and numerical or categorical input data, the “X”, always one variable, hence “one-way”.
The ANOVA tests the null hypothesis, which states that samples in all groups are drawn from populations with the same mean values. To do this, two estimates are made of the population variance. The ANOVA produces an F-statistic, the ratio of the variance calculated among the means to the variance within the samples. If the group means are drawn from populations with the same mean values, the variance between the group means should be lower than the variance of the samples, following the central limit theorem. A higher ratio therefore implies that the samples were drawn from populations with different mean values.

What are the feature vectors?

A feature vector is an n-dimensional vector of numerical features that represent an object. In machine learning, feature vectors are used to represent numeric or symbolic characteristics (called features) of an object in a mathematical way that’s easy to analyze.

What is root cause analysis?

Root cause analysis was initially developed to analyze industrial accidents but is now widely used in other areas. It is a problem-solving technique used for isolating the root causes of faults or problems. A factor is called a root cause if its removal from the problem-fault-sequence prevents the final undesirable event from recurring.

Do gradient descent methods always converge to similar points?

They do not, because in some cases they reach a local minimum or a local optimum point rather than the global optimum. This is governed by the data and the starting conditions.

In your choice of language, write a program that prints the numbers ranging from one to 50. But for multiples of three, print “Fizz” instead of the number, and for the multiples of five, print “Buzz.” For numbers which are multiples of both three and five, print “FizzBuzz.”

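A minimal Python version might look like this:

```python
# FizzBuzz: multiples of 15 are multiples of both 3 and 5, so test them first.
for n in range(1, 51):
    if n % 15 == 0:
        print("FizzBuzz")
    elif n % 3 == 0:
        print("Fizz")
    elif n % 5 == 0:
        print("Buzz")
    else:
        print(n)
```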

What are the different Deep Learning Frameworks?

PyTorch: PyTorch is an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab. It is free and open-source software released under the Modified BSD license.
TensorFlow: TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks. Licensed under the Apache License 2.0 and developed by the Google Brain team.
Microsoft Cognitive Toolkit: Microsoft Cognitive Toolkit (CNTK) describes neural networks as a series of computational steps via a directed graph.
Keras: Keras is an open-source neural-network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible. Licensed under the MIT license.

Data Sciences and Data Mining Glossary

Credit: Dr. Matthew North
Antecedent: In an association rules data mining model, the antecedent is the attribute which precedes the consequent in an identified rule. Attribute order makes a difference when calculating the confidence percentage, so identifying which attribute comes first is necessary even if the reciprocal of the association is also a rule.

Archived Data: Data which have been copied out of a live production database and into a data warehouse or other permanent system where they can be accessed and analyzed, but not by primary operational business systems.

Association Rules: A data mining methodology which compares attributes in a data set across all observations to identify areas where two or more attributes are frequently found together. If their frequency of coexistence is high enough throughout the data set, the association of those attributes can be said to be a rule.

Attribute: In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

Average: The arithmetic mean, calculated by summing all values and dividing by the count of the values.

Binomial: A data type for any set of values that is limited to one of two numeric options.

Binominal: In RapidMiner, the data type binominal is used instead of binomial, enabling both numerical and character-based sets of values that are limited to one of two options.

Business Understanding: See Organizational Understanding: The first step in the CRISP-DM process, usually referred to as Business Understanding, where the data miner develops an understanding of an organization’s goals, objectives, questions, and anticipated outcomes relative to data mining tasks. The data miner must understand why the data mining task is being undertaken before proceeding to gather and understand data.

Case Sensitive: A situation where a computer program recognizes the uppercase version of a letter or word as being different from the lowercase version of the same letter or word.

Classification: One of the two main goals of conducting data mining activities, with the other being prediction. Classification creates groupings in a data set based on the similarity of the observations’ attributes. Some data mining methodologies, such as decision trees, can predict an observation’s classification.

Code: Code is the result of a programmer’s work. It is a set of instructions, written in a specific grammar and syntax, that a computer can understand and execute. According to Lawrence Lessig, it is one of four methods humans can use to set and control boundaries for behavior when interacting with computer systems.

Coefficient: In data mining, a coefficient is a value that is calculated based on the values in a data set that can be used as a multiplier or as an indicator of the relative strength of some attribute or component in a data mining model.

Column: See Attribute. In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

Comma Separated Values (CSV): A common text-based format for data sets where the divisions between attributes (columns of data) are indicated by commas. If commas occur naturally in some of the values in the data set, software reading the CSV file may misinterpret those commas as attribute separators, leading to misalignment of attributes.

Conclusion: See Consequent: In an association rules data mining model, the consequent is the attribute which results from the antecedent in an identified rule. If an association rule were characterized as “If this, then that”, the consequent would be that—in other words, the outcome.

Confidence (Alpha) Level: A value, usually 5% or 0.05, used to test for statistical significance in some data mining methods. If statistical significance is found, a data miner can say that there is a 95% likelihood that a calculated or predicted value is not a false positive.

Confidence Percent: In predictive data mining, this is the percent of calculated confidence that the model has calculated for one or more possible predicted values. It is a measure for the likelihood of false positives in predictions. Regardless of the number of possible predicted values, their collective confidence percentages will always total to 100%.

Consequent: In an association rules data mining model, the consequent is the attribute which results from the antecedent in an identified rule. If an association rule were characterized as “If this, then that”, the consequent would be that—in other words, the outcome.

Correlation: A statistical measure of the strength of affinity, based on the similarity of observational values, of the attributes in a data set. These can be positive (as one attribute’s values go up or down, so too does the correlated attribute’s values); or negative (correlated attributes’ values move in opposite directions). Correlations are indicated by coefficients which fall on a scale between -1 (complete negative correlation) and 1 (complete positive correlation), with 0 indicating no correlation at all between two attributes.

CRISP-DM: An acronym for Cross-Industry Standard Process for Data Mining. This process was jointly developed by several major multi-national corporations around the turn of the new millennium in order to standardize the approach to mining data. It is comprised of six cyclical steps: Business (Organizational) Understanding, Data Understanding, Data Preparation, Modeling, Evaluation, Deployment.

Cross-validation: A method of statistically evaluating a training data set for its likelihood of producing false positives in a predictive data mining model.

Data: Data are any arrangement and compilation of facts. Data may be structured (e.g. arranged in columns (attributes) and rows (observations)), or unstructured (e.g. paragraphs of text, computer log file).

Data Analysis: The process of examining data in a repeatable and structured way in order to extract meaning, patterns or messages from a set of data.

Data Mart: A location where data are stored for easy access by a broad range of people in an organization. Data in a data mart are generally archived data, enabling analysis in a setting that does not impact live operations.

Data Mining: A computational process of analyzing data sets, usually large in nature, using both statistical and logical methods, in order to uncover hidden, previously unknown, and interesting patterns that can inform organizational decision making.

Data Preparation: The third in the six steps of CRISP-DM. At this stage, the data miner ensures that the data to be mined are clean and ready for mining. This may include handling outliers or other inconsistent data, dealing with missing values, reducing attributes or observations, setting attribute roles for modeling, etc.

Data Set: Any compilation of data that is suitable for analysis.

Data Type: In a data set, each attribute is assigned a data type based on the kind of data stored in the attribute. There are many data types which can be generalized into one of three areas: Character (Text) based; Numeric; and Date/Time. Within these categories, RapidMiner has several data types. For example, in the Character area, RapidMiner has Polynominal, Binominal, etc.; and in the Numeric area it has Real, Integer, etc.

Data Understanding: The second in the six steps of CRISP-DM. At this stage, the data miner seeks out sources of data in the organization, and works to collect, compile, standardize, define and document the data. The data miner develops a comprehension of where the data have come from, how they were collected and what they mean.

Data Warehouse: A large-scale repository for archived data which are available for analysis. Data in a data warehouse are often stored in multiple formats (e.g. by week, month, quarter and year), facilitating large scale analyses at higher speeds. The data warehouse is populated by extracting data from operational systems so that analyses do not interfere with live business operations.

Database: A structured organization of facts that is organized such that the facts can be reliably and repeatedly accessed. The most common type of database is a relational database, in which facts (data) are arranged in tables of columns and rows. The data are then accessed using a query language, usually SQL (Structured Query Language), in order to extract meaning from the tables.

Decision Tree: A data mining methodology where leaves and nodes are generated to construct a predictive tree, whereby a data miner can see the attributes which are most predictive of each possible outcome in a target (label) attribute.

Denormalization: The process of removing relational organization from data, reintroducing redundancy into the data, but simultaneously eliminating the need for joins in a relational database, enabling faster querying.

Dependent Variable (Attribute): The attribute in a data set that is being acted upon by the other attributes. It is the thing we want to predict, the target, or label, attribute in a predictive model.

Deployment: The sixth and final of the six steps of CRISP-DM. At this stage, the data miner takes the results of data mining activities and puts them into practice in the organization. The data miner watches closely and collects data to determine if the deployment is successful and ethical. Deployment can happen in stages, such as through pilot programs before a full-scale roll out.

Descartes’ Rule of Change: An ethical framework set forth by Rene Descartes which states that if an action cannot be taken repeatedly, it cannot be ethically taken even once.

Design Perspective: The view in RapidMiner where a data miner adds operators to a data mining stream, sets those operators’ parameters, and runs the model.

Discriminant Analysis: A predictive data mining model which attempts to compare the values of all observations across all attributes and identify where natural breaks occur from one category to another, and then predict which category each observation in the data set will fall into.

Ethics: A set of moral codes or guidelines that an individual develops to guide his or her decision making in order to make fair and respectful decisions and engage in right actions. Ethical standards are higher than legally required minimums.

Evaluation: The fifth of the six steps of CRISP-DM. At this stage, the data miner reviews the results of the data mining model, interprets results and determines how useful they are. He or she may also conduct an investigation into false positives or other potentially misleading results.

False Positive: A predicted value that ends up not being correct.

Field: See Attribute: In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

Frequency Pattern: A recurrence of the same, or similar, observations numerous times in a single data set.

Fuzzy Logic: A data mining concept often associated with neural networks where predictions are made using a training data set, even though some uncertainty exists regarding the data and a model’s predictions.

Gain Ratio: One of several algorithms used to construct decision tree models.

Gini Index: An algorithm created by Corrado Gini that can be used to generate decision tree models.

Heterogeneity: In statistical analysis, this is the amount of variety found in the values of an attribute.

Inconsistent Data: These are values in an attribute in a data set that are out-of-the-ordinary among the whole set of values in that attribute. They can be statistical outliers, or other values that simply don’t make sense in the context of the ‘normal’ range of values for the attribute. They are generally replaced or removed during the Data Preparation phase of CRISP-DM.

Independent Variable (Attribute): These are attributes that act on the dependent attribute (the target, or label). They are used to help predict the label in a predictive model.

Jittering: The process of adding a small, random decimal to discrete values in a data set so that when they are plotted in a scatter plot, they are slightly apart from one another, enabling the analyst to better see clustering and density.

Join: The process of connecting two or more tables in a relational database together so that their attributes can be accessed in a single query, such as in a view.

Kant’s Categorical Imperative: An ethical framework proposed by Immanuel Kant which states that if everyone cannot ethically take some action, then no one can ethically take that action.

k-Means Clustering: A data mining methodology that uses the mean (average) values of the attributes in a data set to group each observation into a cluster of other observations whose values are most similar to the mean for that cluster.

Label: In RapidMiner, this is the role that must be set in order to use an attribute as the dependent, or target, attribute in a predictive model.

Laws: These are regulatory statutes which have associated consequences that are established and enforced by a governmental agency. According to Lawrence Lessig, these are one of the four methods for establishing boundaries to define and regulate social behavior.

Leaf: In a decision tree data mining model, this is the terminal end point of a branch, indicating the predicted outcome for observations whose values follow that branch of the tree.

Linear Regression: A predictive data mining method which uses the algebraic formula for calculating the slope of a line in order to predict where a given observation will likely fall along that line.

Logistic Regression: A predictive data mining method which uses the logistic (sigmoid) function to predict one of a set of possible outcomes, along with a probability that the prediction will be the actual outcome.

Markets: A socio-economic construct in which peoples’ buying, selling, and exchanging behaviors define the boundaries of acceptable or unacceptable behavior. Lawrence Lessig offers this as one of four methods for defining the parameters of appropriate behavior.

Mean: See Average: The arithmetic mean, calculated by summing all values and dividing by the count of the values. 

Median: With the Mean and Mode, this is one of three generally used Measures of Central Tendency. It is an arithmetic way of defining what ‘normal’ looks like in a numeric attribute. It is calculated by rank ordering the values in an attribute and finding the one in the middle. If there are an even number of observations, the two in the middle are averaged to find the median.

Meta Data: These are facts that describe the observational values in an attribute. Meta data may include who collected the data, when, why, where, how, how often; and usually include some descriptive statistics such as the range, average, standard deviation, etc.

Missing Data: These are instances in an observation where one or more attributes does not have a value. It is not the same as zero, because zero is a value. Missing data are like Null values in a database; they are either unknown or undefined. These are usually replaced or removed during the Data Preparation phase of CRISP-DM.

Mode: With Mean and Median, this is one of three common Measures of Central Tendency. It is the value in an attribute which is the most common. It can be numerical or text. If an attribute contains two or more values that appear an equal number of times and more than any other values, then all are listed as the mode, and the attribute is said to be Bimodal or Multimodal.

Model: A computer-based representation of real-life events or activities, constructed upon the basis of data which represent those events.

Name (Attribute): This is the text descriptor of each attribute in a data set. In RapidMiner, the first row of an imported data set should be designated as the attribute name, so that these are not interpreted as the first observation in the data set.

Neural Network: A predictive data mining methodology which tries to mimic human brain processes by comparing the values of all attributes in a data set to one another through the use of a hidden layer of nodes. The frequencies with which the attribute values match, or are strongly similar, create neurons which become stronger at higher frequencies of similarity.

n-Gram: In text mining, this is a combination of words or word stems that represents a phrase that may have more meaning or significance than the single word or stem would alone.

Node: A terminal or mid-point in decision trees and neural networks where an attribute branches or forks away from other terminals or branches because the values represented at that point have become significantly different from all other values for that attribute.

Normalization: In a relational database, this is the process of breaking data out into multiple related tables in order to reduce redundancy and eliminate multivalued dependencies.

Null: The absence of a value in a database. The value is unrecorded, unknown, or undefined. See Missing Values.

Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Online Analytical Processing (OLAP): A database concept where data are collected and organized in a way that facilitates analysis, rather than practical, daily operational work. Evaluating data in a data warehouse is an example of OLAP. The underlying structure that collects and holds the data makes analysis faster, but would slow down transactional work.

Online Transaction Processing (OLTP): A database concept where data are collected and organized in a way that facilitates fast and repeated transactions, rather than broader analytical work. Scanning items being purchased at a cash register is an example of OLTP. The underlying structure that collects and holds the data makes transactions faster, but would slow down analysis.

Operational Data: Data which are generated as a result of day-to-day work (e.g. the entry of work orders for an electrical service company).

Operator: In RapidMiner, an operator is any one of more than 100 tools that can be added to a data mining stream in order to perform some function. Functions range from adding a data set, to setting an attribute’s role, to applying a modeling algorithm. Operators are connected into a stream by way of ports connected by splines.

Organizational Data: These are data which are collected by an organization, often in aggregate or summary format, in order to address a specific question, tell a story, or answer a specific question. They may be constructed from Operational Data, or added to through other means such as surveys, questionnaires or tests.

Organizational Understanding: The first step in the CRISP-DM process, usually referred to as Business Understanding, where the data miner develops an understanding of an organization’s goals, objectives, questions, and anticipated outcomes relative to data mining tasks. The data miner must understand why the data mining task is being undertaken before proceeding to gather and understand data.

Parameters: In RapidMiner, these are the settings that control values and thresholds that an operator will use to perform its job. These may be the attribute name and role in a Set Role operator, or the algorithm the data miner desires to use in a model operator.

Port: The input or output required for an operator to perform its function in RapidMiner. These are connected to one another using splines.

Prediction: The target, or label, or dependent attribute that is generated by a predictive model, usually for a scoring data set in a model.

Premise: See Antecedent: In an association rules data mining model, the antecedent is the attribute which precedes the consequent in an identified rule. Attribute order makes a difference when calculating the confidence percentage, so identifying which attribute comes first is necessary even if the reciprocal of the association is also a rule.

Privacy: The concept describing a person’s right to be let alone; to have information about them kept away from those who should not, or do not need to, see it. A data miner must always respect and safeguard the privacy of individuals represented in the data he or she mines.

Professional Code of Conduct: A helpful guide or documented set of parameters by which an individual in a given profession agrees to abide. These are usually written by a board or panel of experts and adopted formally by a professional organization.

Query: A method of structuring a question, usually using code, that can be submitted to, interpreted, and answered by a computer.

Record: See Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Relational Database: A computerized repository, comprised of entities that relate to one another through keys. The most basic and elemental entity in a relational database is the table, and tables are made up of attributes. One or more of these attributes serves as a key that can be matched (or related) to a corresponding attribute in another table, creating the relational effect which reduces data redundancy and eliminates multivalued dependencies.

Repository: In RapidMiner, this is the place where imported data sets are stored so that they are accessible for modeling.

Results Perspective: The view in RapidMiner that is seen when a model has been run. It is usually comprised of two or more tabs which show meta data, data in a spreadsheet-like view, and predictions and model outcomes (including graphical representations where applicable).

Role (Attribute): In a data mining model, each attribute must be assigned a role. The role is the part the attribute plays in the model. It is usually equated to serving as an independent variable (regular), or dependent variable (label).

Row: See Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Sample: A subset of an entire data set, selected randomly or in a structured way. This usually reduces a data set down, allowing models to be run faster, especially during development and proof-of-concept work on a model.

Scoring Data: A data set with the same attributes as a training data set in a predictive model, with the exception of the label. The training data set, with the label defined, is used to create a predictive model, and that model is then applied to a scoring data set possessing the same attributes in order to predict the label for each scoring observation.

Social Norms: These are the sets of behaviors and actions that are generally tolerated and found to be acceptable in a society. According to Lawrence Lessig, these are one of four methods of defining and regulating appropriate behavior.

Spline: In RapidMiner, these lines connect the ports between operators, creating the stream of a data mining model.

Standard Deviation: One of the most common statistical measures of how dispersed the values in an attribute are. This measure can help determine whether or not there are outliers (a common type of inconsistent data) in a data set.

Standard Operating Procedures: These are organizational guidelines that are documented and shared with employees which help to define the boundaries for appropriate and acceptable behavior in the business setting. They are usually created and formally adopted by a group of leaders in the organization, with input from key stakeholders in the organization.

Statistical Significance: In statistically-based data mining activities, this is the measure of whether or not the model has yielded any results that are mathematically reliable enough to be used. Any model lacking statistical significance should not be used in operational decision making.

Stemming: In text mining, this is the process of reducing like-terms down into a single, common token (e.g. country, countries, country’s, countryman, etc. → countr).

Stopwords: In text mining, these are small words that are necessary for grammatical correctness, but which carry little meaning or power in the message of the text being mined. These are often articles, prepositions or conjunctions, such as ‘a’, ‘the’, ‘and’, etc., and are usually removed in the Process Document operator’s sub-process.

Stream: This is the string of operators in a data mining model, connected through the operators’ ports via splines, that represents all actions that will be taken on a data set in order to mine it.

Structured Query Language (SQL): The set of codes, reserved keywords and syntax defined by the American National Standards Institute used to create, manage and use relational databases.

Sub-process: In RapidMiner, this is a stream of operators set up to apply a series of actions to all inputs connected to the parent operator.

Support Percent: In an association rule data mining model, this is the percent of observations in the data set in which the antecedent and consequent are found together. Since it is calculated as the number of times the two are found together divided by the total number of observations in which they could have been found together, the Support Percent is the same for reciprocal rules.

Table: In data collection, a table is a grid of columns and rows, where in general, the columns are individual attributes in the data set, and the rows are observations across those attributes. Tables are the most elemental entity in relational databases.

Target Attribute: See Label; Dependent Variable: The attribute in a data set that is being acted upon by the other attributes. It is the thing we want to predict, the target, or label, attribute in a predictive model.

Technology: Any tool or process invented by mankind to do or improve work.

Text Mining: The process of data mining unstructured text-based data such as essays, news articles, speech transcripts, etc. to discover patterns of word or phrase usage to reveal deeper or previously unrecognized meaning.

Token (Tokenize): In text mining, this is the process of turning words in the input document(s) into attributes that can be mined.

Training Data: In a predictive model, this data set already has the label, or dependent variable defined, so that it can be used to create a model which can be applied to a scoring data set in order to generate predictions for the latter.

Tuple: See Observation: A row of data in a data set. It consists of the value assigned to each attribute for one record in the data set. It is sometimes called a tuple in database language.

Variable: See Attribute: In columnar data, an attribute is one column. It is named in the data so that it can be referred to by a model and used in data mining. The term attribute is sometimes interchanged with the terms ‘field’, ‘variable’, or ‘column’.

View: A type of pseudo-table in a relational database which is actually a named, stored query. This query runs against one or more tables, retrieving a defined number of attributes that can then be referenced as if they were in a table in the database. Views can limit users’ ability to see attributes to only those that are relevant and/or approved for those users to see. They can also speed up the query process because although they may contain joins, the key columns for the joins can be indexed and cached, making the view’s query run faster than it would if it were not stored as a view. Views can be useful in data mining as data miners can be given read-only access to the view, upon which they can build data mining models, without having to have broader administrative rights on the database itself.

What is the Central Limit Theorem and why is it important?

An Introduction to the Central Limit Theorem

Answer: Suppose that we are interested in estimating the average height among all people. Collecting data for every person in the world is impractical, bordering on impossible. While we can’t obtain a height measurement from everyone in the population, we can still sample some people. The question now becomes: what can we say about the average height of the entire population given a single sample?
The Central Limit Theorem addresses this question exactly. Formally, it states that if we sample from a population using a sufficiently large sample size, the mean of the samples will be normally distributed (assuming true random sampling), with its mean tending to the mean of the population and its variance equal to the variance of the population divided by the sample size.
What’s especially important is that this will be true regardless of the distribution of the original population.

Central Limit Theorem: Population Distribution

As we can see, the distribution is pretty ugly. It certainly isn’t normal, uniform, or any other commonly known distribution. In order to sample from the above distribution, we need to define a sample size, referred to as N. This is the number of observations that we will sample at a time. Suppose that we choose N to be 3. This means that we will sample in groups of 3. So for the above population, we might sample groups such as [5, 20, 41], [60, 17, 82], [8, 13, 61], and so on.
Suppose that we gather 1,000 samples of 3 from the above population. For each sample, we can compute its average. If we do that, we will have 1,000 averages. This set of 1,000 averages is called a sampling distribution, and according to the Central Limit Theorem, the sampling distribution will approach a normal distribution as the sample size N used to produce it increases. Here is what our sampling distribution looks like for N = 3.

Sample Mean Distribution with N = 3

As we can see, it certainly looks uni-modal, though not necessarily normal. If we repeat the same process with a larger sample size, we should see the sampling distribution start to become more normal. Let’s repeat the same process again with N = 10. Here is the sampling distribution for that sample size.

Sample Mean Distribution with N = 10
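
The experiment above is easy to reproduce in a few lines of Python. In this sketch, a right-skewed exponential population stands in for the “ugly” distribution described above (an assumption, not the original data):

```python
import numpy as np

rng = np.random.default_rng(42)

# A deliberately non-normal population: exponential, heavily right-skewed.
population = rng.exponential(scale=10.0, size=100_000)

for n in (3, 10, 50):
    # Draw 1,000 samples of size n and record each sample's mean.
    sample_means = rng.choice(population, size=(1000, n)).mean(axis=1)
    print(f"N={n:>2}: mean of sample means = {sample_means.mean():6.2f}, "
          f"std of sample means = {sample_means.std():.2f}")
```

As N grows, the sample means cluster more tightly around the population mean (10 here), and a histogram of sample_means looks increasingly normal.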

Credit: Steve Nouri

What is bias-variance trade-off?

Bias: Bias is an error introduced in the model due to the oversimplification of the algorithm used (it does not fit the data properly). It can lead to under-fitting.
Low-bias machine learning algorithms: Decision Trees, k-NN, and SVM.
High-bias machine learning algorithms: Linear Regression and Logistic Regression.

Variance: Variance is an error introduced in the model due to an overly complex algorithm: the model performs very well on the training set but poorly on the test set. It can lead to high sensitivity and overfitting.
Polynomial regression, for example, is prone to high variance.

Normally, as you increase the complexity of your model, you will see a reduction in error due to lower bias in the model. However, this only happens up to a particular point: as you continue to make your model more complex, you end up over-fitting, and your model starts to suffer from high variance.

bias-variance trade-off

Bias-Variance trade-off: The goal of any supervised machine learning algorithm is to have low bias and low variance to achieve good prediction performance.

1. The k-nearest neighbor algorithm has low bias and high variance, but the trade-off can be changed by increasing the value of k, which increases the number of neighbors that contribute to the prediction and in turn increases the bias of the model (illustrated in the sketch below).
2. The support vector machine algorithm has low bias and high variance, but the trade-off can be changed by increasing the C parameter, which influences the number of margin violations allowed in the training data; this increases the bias but decreases the variance.
3. The decision tree has low bias and high variance; to shift the trade-off, you can decrease the depth of the tree or use fewer attributes.
4. Linear regression has low variance and high bias; you can increase the number of features or use another regression that better fits the data.

There is no escaping the relationship between bias and variance in machine learning. Increasing the bias will decrease the variance. Increasing the variance will decrease bias.
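
As a concrete illustration of point 1 above, here is a small sketch (synthetic data, scikit-learn assumed available) showing how increasing k in k-NN trades variance for bias:

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(300, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.3, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Small k: low bias, high variance (train error near zero, test error high).
# Large k: higher bias, lower variance (train and test errors converge).
for k in (1, 5, 25, 100):
    model = KNeighborsRegressor(n_neighbors=k).fit(X_train, y_train)
    train_mse = mean_squared_error(y_train, model.predict(X_train))
    test_mse = mean_squared_error(y_test, model.predict(X_test))
    print(f"k={k:>3}: train MSE = {train_mse:.3f}, test MSE = {test_mse:.3f}")
```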

The Best Medium-Hard Data Analyst SQL Interview Questions

compiled by Google Data Analyst Zachary Thomas!

The Best Medium-Hard Data Analyst SQL Interview Questions

Self-Join Practice Problems: MoM Percent Change

Context: Oftentimes it’s useful to know how much a key metric, such as monthly active users, changes between months.
Say we have a table logins in the form:


Task: Find the month-over-month percentage change for monthly active users (MAU).

Solution:
(This solution, like other solution code blocks you will see in this doc, contains comments about SQL syntax that may differ between flavors of SQL or other comments about the solutions as listed)

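One possible self-join approach in Postgres-flavored SQL; the logins columns user_id and date are assumptions:

```sql
-- Assumed schema: logins(user_id, date)
WITH mau AS (
    SELECT DATE_TRUNC('month', date) AS month,
           COUNT(DISTINCT user_id)   AS mau
    FROM logins
    GROUP BY 1
)
SELECT cur.month,
       ROUND(100.0 * (cur.mau - prev.mau) / prev.mau, 2) AS mom_pct_change
FROM mau AS cur
JOIN mau AS prev
  ON cur.month = prev.month + INTERVAL '1 month';
```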

Tree Structure Labeling with SQL

Context: Say you have a table tree with a column of nodes and a corresponding column of parent nodes:

Task: Write SQL such that we label each node as a “Leaf”, “Inner”, or “Root” node, such that for the nodes above we get:

A solution which works for the above example will receive full credit, although you can receive extra credit for providing a solution that is generalizable to a tree of any depth (not just depth = 2, as is the case in the example above).

Solution: This solution works for the example above with tree depth = 2, but is not generalizable beyond that.

An alternate solution, that is generalizable to any tree depth:
Acknowledgement: this more generalizable solution was contributed by Fabian Hofmann

An alternate solution, without explicit joins:
Acknowledgement: William Chargin on 5/2/20 noted that WHERE parent IS NOT NULL is needed to make this solution return Leaf instead of NULL.
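
A sketch of one such join-free, depth-agnostic query (the tree columns node and parent are assumed):

```sql
-- Assumed schema: tree(node, parent); parent is NULL for the root
SELECT node,
       CASE
           WHEN parent IS NULL THEN 'Root'
           WHEN node NOT IN (SELECT parent FROM tree
                             WHERE parent IS NOT NULL) THEN 'Leaf'
           ELSE 'Inner'
       END AS label
FROM tree;
```

The WHERE parent IS NOT NULL guard matters because NOT IN compared against a set containing NULL never evaluates to true.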

Retained Users Per Month with SQL

Acknowledgement: this problem is adapted from SiSense’s “Using Self Joins to Calculate Your Retention, Churn, and Reactivation Metrics” blog post

PART 1:
Context: Say we have login data in the table logins:

Task: Write a query that gets the number of retained users per month. In this case, retention for a given month is defined as the number of users who logged in that month who also logged in the immediately previous month.

Solution:
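
One self-join sketch (Postgres-flavored; logins(user_id, date) assumed):

```sql
-- Retained: logged in this month and also the immediately previous month
SELECT DATE_TRUNC('month', cur.date) AS month,
       COUNT(DISTINCT cur.user_id)   AS retained_users
FROM logins AS cur
JOIN logins AS prev
  ON cur.user_id = prev.user_id
 AND DATE_TRUNC('month', prev.date) =
     DATE_TRUNC('month', cur.date) - INTERVAL '1 month'
GROUP BY 1;
```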

PART 2:

Task: Now we’ll take retention and turn it on its head: write a query to find how many users last month did not come back this month, i.e., the number of churned users.

Solution:
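
One LEFT JOIN sketch under the same assumed schema: for each user active last month, look for a login this month and keep only those without one.

```sql
-- Churned: active last month, no login this month
SELECT DATE_TRUNC('month', prev.date) + INTERVAL '1 month' AS month,
       COUNT(DISTINCT prev.user_id)  AS churned_users
FROM logins AS prev
LEFT JOIN logins AS cur
  ON cur.user_id = prev.user_id
 AND DATE_TRUNC('month', cur.date) =
     DATE_TRUNC('month', prev.date) + INTERVAL '1 month'
WHERE cur.user_id IS NULL
GROUP BY 1;
```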

Note that there are solutions to this problem that can use LEFT or RIGHT joins.

PART 3:
Context: You now want to see the number of active users this month who have been reactivated, in other words, users who had churned but became active again this month. Keep in mind a user can reactivate after churning before the previous month. An example of this could be a user active in February (appears in logins), with no activity in March and April, but then active again in May (appears in logins), so they count as a reactivated user for May.

Task: Create a table that contains the number of reactivated users per month.

Solution:
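
One sketch under the same assumed schema: count users active this month who have no login last month but do have at least one login in some earlier month.

```sql
-- Reactivated: active this month, inactive last month, active earlier
SELECT DATE_TRUNC('month', cur.date) AS month,
       COUNT(DISTINCT cur.user_id)   AS reactivated_users
FROM logins AS cur
WHERE NOT EXISTS (
        SELECT 1 FROM logins AS prev
        WHERE prev.user_id = cur.user_id
          AND DATE_TRUNC('month', prev.date) =
              DATE_TRUNC('month', cur.date) - INTERVAL '1 month')
  AND EXISTS (
        SELECT 1 FROM logins AS past
        WHERE past.user_id = cur.user_id
          AND past.date < DATE_TRUNC('month', cur.date))
GROUP BY 1;
```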

Cumulative Sums with SQL

Acknowledgement: This problem was inspired by Sisense’s “Cash Flow modeling in SQL” blog post
Context: Say we have a table transactions in the form:

Where cash_flow is the revenues minus costs for each day.

Task: Write a query to get cumulative cash flow for each day such that we end up with a table in the form below:

Solution using a window function (more efficient):
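
A window-function sketch (the transactions columns date and cash_flow are assumed from the context above):

```sql
-- Running total of cash_flow ordered by day
SELECT date,
       SUM(cash_flow) OVER (ORDER BY date) AS cumulative_cash_flow
FROM transactions
ORDER BY date;
```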

Alternative Solution (less efficient):

Rolling Averages with SQL

Acknowledgement: This problem is adapted from Sisense’s “Rolling Averages in MySQL and SQL Server” blog post
Note: there are different ways to compute rolling/moving averages. Here we’ll use a preceding average which means that the metric for the 7th day of the month would be the average of the preceding 6 days and that day itself.
Context: Say we have table signups in the form:

Task: Write a query to get 7-day rolling (preceding) average of daily sign ups

Solution 1:

Solution 2 (using window functions, more efficient):
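
A window-function sketch (the signups columns date and sign_ups are assumed):

```sql
-- Preceding 7-day average: the current day plus the 6 days before it
SELECT date,
       AVG(sign_ups) OVER (ORDER BY date
                           ROWS BETWEEN 6 PRECEDING AND CURRENT ROW) AS avg_7d
FROM signups
ORDER BY date;
```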

Multiple Join Conditions in SQL

Acknowledgement: This problem was inspired by Sisense’s “Analyzing Your Email with SQL” blog post
Context: Say we have a table emails that includes emails sent to and from zach@g.com:

Task: Write a query to get the response time per email (id) sent to zach@g.com. Do not include ids that did not receive a response from zach@g.com. Assume each email thread has a unique subject. Keep in mind a thread may have multiple responses back and forth between zach@g.com and another email address.

Solution:
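
One sketch, assuming an emails table with id, subject, from, to, and timestamp columns (from and to must be quoted because they are reserved words; the whole schema is an assumption):

```sql
-- For each inbound email, find Zach's earliest later reply in the same thread
SELECT original.id,
       MIN(reply.timestamp) - original.timestamp AS response_time
FROM emails AS original
JOIN emails AS reply
  ON reply.subject = original.subject
 AND reply."from" = 'zach@g.com'
 AND reply.timestamp > original.timestamp
WHERE original."to" = 'zach@g.com'
GROUP BY original.id, original.timestamp;
```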

SQL Window Function Practice Problems

#1: Get the ID with the highest value
Context: Say we have a table salaries with data on employee salary and department in the following format:

Task: Write a query to get the empno with the highest salary. Make sure your solution can handle ties!
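
One window-function sketch that handles ties with RANK() (the salaries columns depname, empno, and salary are assumed from the format above):

```sql
-- RANK() gives every top-paid employee rank 1, so ties are all returned
SELECT empno
FROM (SELECT empno,
             RANK() OVER (ORDER BY salary DESC) AS salary_rank
      FROM salaries) AS ranked
WHERE salary_rank = 1;
```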

#2: Average and rank with a window function (multi-part)

PART 1:
Context: Say we have a table salaries in the format:

Task: Write a query that returns the same table, but with a new column that has average salary per depname. We would expect a table in the form:

Solution:
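
A sketch using AVG() as a window function partitioned by department (same assumed salaries schema):

```sql
-- Keeps every row and appends the department's average salary
SELECT *,
       AVG(salary) OVER (PARTITION BY depname) AS avg_salary
FROM salaries;
```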

PART 2:
Task: Write a query that adds a column with the rank of each employee based on their salary within their department, where the employee with the highest salary gets the rank of 1. We would expect a table in the form:

Solution:
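
A sketch using DENSE_RANK() so that ties share a rank without gaps (RANK() would also be acceptable if gaps after ties are fine):

```sql
-- Rank employees within each department by descending salary
SELECT *,
       DENSE_RANK() OVER (PARTITION BY depname
                          ORDER BY salary DESC) AS salary_rank
FROM salaries;
```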

Predictive Modelling Questions

Source:  datasciencehandbook.me

 

1-  (Given a Dataset) Analyze this dataset and give me a model that can predict this response variable. 

2-  What could be some issues if the distribution of the test data is significantly different than the distribution of the training data?

3-  What are some ways I can make my model more robust to outliers?

4-  What are some differences you would expect in a model that minimizes squared error, versus a model that minimizes absolute error? In which cases would each error metric be appropriate?

5- What error metric would you use to evaluate how good a binary classifier is? What if the classes are imbalanced? What if there are more than 2 groups?

6-  What are various ways to predict a binary response variable? Can you compare two of them and tell me when one would be more appropriate? What’s the difference between these? (SVM, Logistic Regression, Naive Bayes, Decision Tree, etc.)

7-  What is regularization and where might it be helpful? What is an example of using regularization in a model?

8-  Why might it be preferable to include fewer predictors over many?

9-  Given training data on tweets and their retweets, how would you predict the number of retweets of a given tweet after 7 days after only observing 2 days worth of data?

10-  How could you collect and analyze data to use social media to predict the weather?

11- How would you construct a feed to show relevant content for a site that involves user interactions with items?

12- How would you design the people you may know feature on LinkedIn or Facebook?

13- How would you predict who someone may want to send a Snapchat or Gmail to?

14- How would you suggest to a franchise where to open a new store?

15- In a search engine, given partial data on what the user has typed, how would you predict the user’s eventual search query?

16- Given a database of all previous alumni donations to your university, how would you predict which recent alumni are most likely to donate?

17- You’re Uber and you want to design a heatmap to recommend to drivers where to wait for a passenger. How would you approach this?

18- How would you build a model to predict a March Madness bracket?

19- You want to run a regression to predict the probability of a flight delay, but there are flights with delays of up to 12 hours that are really messing up your model. How can you address this?

 

Data Analysis Interview Questions

Source:  datasciencehandbook.me

1- (Given a Dataset) Analyze this dataset and tell me what you can learn from it.

2- What is R2? What are some other metrics that could be better than R2 and why?

3- What is the curse of dimensionality?

4- Is more data always better?

5- What are advantages of plotting your data before performing analysis?

6- How can you make sure that you don’t analyze something that ends up meaningless?

7- What is the role of trial and error in data analysis? What is the role of making a hypothesis before diving in?

8- How can you determine which features are the most important in your model?

9- How do you deal with some of your predictors being missing?

10- You have several variables that are positively correlated with your response, and you think combining all of the variables could give you a good prediction of your response. However, you see that in the multiple linear regression, one of the weights on the predictors is negative. What could be the issue?

11- Let’s say you’re given an unfeasible amount of predictors in a predictive modeling task. What are some ways to make the prediction more feasible?

12- Now you have a feasible amount of predictors, but you’re fairly sure that you don’t need all of them. How would you perform feature selection on the dataset?

13- Your linear regression didn’t run and communicates that there are an infinite number of best estimates for the regression coefficients. What could be wrong?

14- You run your regression on different subsets of your data, and find that in each subset, the beta value for a certain variable varies wildly. What could be the issue here?

15- What is the main idea behind ensemble learning? If I had many different models that predicted the same response variable, what might I want to do to incorporate all of the models? Would you expect this to perform better than an individual model or worse?

16- Given that you have wifi data in your office, how would you determine which rooms and areas are underutilized and overutilized?

17- How could you use GPS data from a car to determine the quality of a driver?

18- Given accelerometer, altitude, and fuel usage data from a car, how would you determine the optimum acceleration pattern to drive over hills?

19- Given position data of NBA players in a season’s games, how would you evaluate a basketball player’s defensive ability?

20- How would you quantify the influence of a Twitter user?

21- Given location data of golf balls in games, how would you construct a model that can advise golfers where to aim?

22- You have 100 mathletes and 100 math problems. Each mathlete gets to choose 10 problems to solve. Given data on who got what problem correct, how would you rank the problems in terms of difficulty?

23- You have 5000 people that rank 10 sushis in terms of saltiness. How would you aggregate this data to estimate the true saltiness rank of each sushi?

24-Given data on congressional bills and which congressional representatives co-sponsored the bills, how would you determine which other representatives are most similar to yours in voting behavior? How would you evaluate who is the most liberal? Most republican? Most bipartisan?

25- How would you come up with an algorithm to detect plagiarism in online content?

26- You have data on all purchases of customers at a grocery store. Describe to me how you would program an algorithm that would cluster the customers into groups. How would you determine the appropriate number of clusters to include?

27- Let’s say you’re building the recommended music engine at Spotify to recommend people music based on past listening history. How would you approach this problem?

28- Explain how boosted tree models work in simple language.

29- What sort of data sampling techniques would you use for a low signal temporal classification problem?

30- How would you deal with categorical variables and what considerations would you keep in mind?

31- How would you identify leakage in your machine learning model?

32- How would you apply a machine learning model in a live experiment?

33- What is the difference between sensitivity, precision, and recall? When would you use these over accuracy? Name a few situations.

34- What’s the importance of train, val, test splits and how would you split or create your dataset – how would this impact your model metrics?

35- What are some simple ways to optimise your model and how would you know you’ve reached a stable and performant model?

Statistical Inference Interview Questions

Source:  datasciencehandbook.me

1- In an A/B test, how can you check if assignment to the various buckets was truly random?

2- What might be the benefits of running an A/A test, where you have two buckets who are exposed to the exact same product?

3- What would be the hazards of letting users sneak a peek at the other bucket in an A/B test?

4- What would be some issues if blogs decide to cover one of your experimental groups?

5- How would you conduct an A/B test on an opt-in feature?

6- How would you run an A/B test for many variants, say 20 or more?

7- How would you run an A/B test if the observations are extremely right-skewed?

8- I have two different experiments that both change the sign-up button to my website. I want to test them at the same time. What kinds of things should I keep in mind?

9- What is a p-value? What is the difference between type-1 and type-2 error?

10- You are AirBnB and you want to test the hypothesis that a greater number of photographs increases the chances that a buyer selects the listing. How would you test this hypothesis?

11- How would you design an experiment to determine the impact of latency on user engagement?

12- What is maximum likelihood estimation? Could there be any case where it doesn’t exist?

13- What’s the difference between a MAP, MOM, MLE estimator? In which cases would you want to use each?

14- What is a confidence interval and how do you interpret it?

15- What is unbiasedness as a property of an estimator? Is this always a desirable property when performing inference? What about in data analysis or predictive modeling?

Product Metric Interview Questions

Source:  datasciencehandbook.me

1- What would be good metrics of success for an advertising-driven consumer product? (Buzzfeed, YouTube, Google Search, etc.) A service-driven consumer product? (Uber, Flickr, Venmo, etc.)

2- What would be good metrics of success for a productivity tool? (Evernote, Asana, Google Docs, etc.) A MOOC? (edX, Coursera, Udacity, etc.)

3- What would be good metrics of success for an e-commerce product? (Etsy, Groupon, Birchbox, etc.) A subscription product? (Netflix, Birchbox, Hulu, etc.) Premium subscriptions? (OKCupid, LinkedIn, Spotify, etc.)

4- What would be good metrics of success for a consumer product that relies heavily on engagement and interaction? (Snapchat, Pinterest, Facebook, etc.) A messaging product? (GroupMe, Hangouts, Snapchat, etc.)

5- What would be good metrics of success for a product that offered in-app purchases? (Zynga, Angry Birds, other gaming apps)

6- A certain metric is violating your expectations by going down or up more than you expect. How would you try to identify the cause of the change?

7- Growth for total number of tweets sent has been slow this month. What data would you look at to determine the cause of the problem?

8- You’re a restaurant and are approached by Groupon to run a deal. What data would you ask from them in order to determine whether or not to do the deal?

9- You are tasked with improving the efficiency of a subway system. Where would you start?

10- Say you are working on Facebook News Feed. What would be some metrics that you think are important? How would you make the news each person gets more relevant?

11- How would you measure the impact that sponsored stories on Facebook News Feed have on user engagement? How would you determine the optimum balance between sponsored stories and organic content on a user’s News Feed?

12- You are on the data science team at Uber and you are asked to start thinking about surge pricing. What would be the objectives of such a product and how would you start looking into this?

13- Say that you are Netflix. How would you determine what original series you should invest in and create?

14- What kind of services would find churn (metric that tracks how many customers leave the service) helpful? How would you calculate churn?

15- Let’s say that you’re scheduling content for a content provider on television. How would you determine the best times to schedule content?

Programming Questions

Source:  datasciencehandbook.me

1- Write a function to calculate all possible assignment vectors of 2n users, where n users are assigned to group 0 (control), and n users are assigned to group 1 (treatment).

2- Given a list of tweets, determine the top 10 most used hashtags.

3- Program an algorithm to find the best approximate solution to the knapsack problem in a given time.

4- Program an algorithm to find the best approximate solution to the travelling salesman problem in a given time.

5- You have a stream of data coming in of size n, but you don’t know what n is ahead of time. Write an algorithm that will take a random sample of k elements. Can you write one that takes O(k) space?

6- Write an algorithm that can calculate the square root of a number.

7- Given a list of numbers, can you return the outliers?

8- When can parallelism make your algorithms run faster? When could it make your algorithms run slower?

9- What are the different types of joins? What are the differences between them?

10- Why might a join on a subquery be slow? How might you speed it up?

11- Describe the difference between primary keys and foreign keys in a SQL database.

12- Given a COURSES table with columns course_id and course_name, a FACULTY table with columns faculty_id and faculty_name, and a COURSE_FACULTY table with columns faculty_id and course_id, how would you return a list of faculty who teach a course given the name of a course?

13- Given an IMPRESSIONS table with ad_id, click (an indicator that the ad was clicked), and date, write a SQL query that will tell me the click-through-rate of each ad by month.

14- Write a query that returns the name of each department and a count of the number of employees in each:
EMPLOYEES containing: Emp_ID (Primary key) and Emp_Name
EMPLOYEE_DEPT containing: Emp_ID (Foreign key) and Dept_ID (Foreign key)
DEPTS containing: Dept_ID (Primary key) and Dept_Name

Probability Questions

1- Bobo the amoeba has a 25%, 25%, and 50% chance of producing 0, 1, or 2 offspring, respectively. Each of Bobo’s descendants also has the same probabilities. What is the probability that Bobo’s lineage dies out?

2- In any 15-minute interval, there is a 20% probability that you will see at least one shooting star. What is the probability that you see at least one shooting star in the period of an hour?

3- How can you generate a random number between 1 and 7 with only a die?

4- How can you get a fair coin toss if someone hands you a coin that is weighted to come up heads more often than tails?

5- You have a 50-50 mixture of two normal distributions with the same standard deviation. How far apart do the means need to be in order for this distribution to be bimodal?

6- Given draws from a normal distribution with known parameters, how can you simulate draws from a uniform distribution?

7- A certain couple tells you that they have two children, at least one of which is a girl. What is the probability that they have two girls?

8- You have a group of couples that decide to have children until they have their first girl, after which they stop having children. What is the expected gender ratio of the children that are born? What is the expected number of children each couple will have?

9- How many ways can you split 12 people into 3 teams of 4?

10- Your hash function assigns each object to a number between 1 and 10, each with equal probability. With 10 objects, what is the probability of a hash collision? What is the expected number of hash collisions? What is the expected number of hashes that are unused?

11- You call 2 UberX’s and 3 Lyfts. If the time that each takes to reach you is IID, what is the probability that all the Lyfts arrive first? What is the probability that all the UberX’s arrive first?

12- I write a program that should print out all the numbers from 1 to 300, but prints out Fizz instead if the number is divisible by 3, Buzz instead if the number is divisible by 5, and FizzBuzz if the number is divisible by both 3 and 5. What is the total count of numbers that are either Fizzed, Buzzed, or FizzBuzzed?

13- On a dating site, users can select 5 out of 24 adjectives to describe themselves. A match is declared between two users if they match on at least 4 adjectives. If Alice and Bob randomly pick adjectives, what is the probability that they form a match?

14- A lazy high school senior types up applications and envelopes to n different colleges, but puts the applications randomly into the envelopes. What is the expected number of applications that went to the right college?

15- Let’s say you have a very tall father. On average, what would you expect the height of his son to be? Taller, equal, or shorter? What if you had a very short father?

16- What’s the expected number of coin flips until you get two heads in a row? What’s the expected number of coin flips until you get two tails in a row?

17- Let’s say we play a game where I keep flipping a coin until I get heads. If the first time I get heads is on the nth coin, then I pay you 2^(n-1) dollars. How much would you pay me to play this game?

18- You have two coins, one of which is fair and comes up heads with a probability 1/2, and the other which is biased and comes up heads with probability 3/4. You randomly pick a coin and flip it twice, and get heads both times. What is the probability that you picked the fair coin?

19- You have a 0.1% chance of picking up a coin with both heads, and a 99.9% chance that you pick up a fair coin. You flip your coin and it comes up heads 10 times. What’s the chance that you picked up the fair coin, given the information that you observed?

Reference: 800 Data Science Questions & Answers doc by

 

Direct download here

Reference: 164 Data Science Interview Questions and Answers by 365 Data Science

Download it here

Data Warehouse Cheat Sheet

What are Differences between Supervised and Unsupervised Learning?

| Supervised | Unsupervised |
| --- | --- |
| Input data is labelled | Input data is unlabeled |
| Split into training/validation/test sets | No split |
| Used for prediction | Used for analysis |
| Classification and regression | Clustering, dimensionality reduction, and density estimation |

Python Cheat Sheet

Download it here


Data Sciences Cheat Sheet

Download it here

Panda Cheat Sheet

Download it here

Learn SQL with Practical Exercises

SQL is definitely one of the most fundamental skills needed to be a data scientist.

This is a comprehensive handbook that can help you to learn SQL (Structured Query Language), which could be directly downloaded here

Credit: D Armstrong

Data Visualization: A comprehensive VIP Matplotlib Cheat sheet

Credit: Matplotlib

Download it here

Power BI for Intermediates

Download it here

Credit: Soheil Bakhshi and Bruce Anderson

Python Frameworks for Data Science

Natural Language Processing (NLP) is one of the top areas today.

Some of the applications are:

  • Reading printed text and correcting reading errors
  • Find and replace
  • Correction of spelling mistakes
  • Development of aids
  • Text summarization
  • Language translation
  • and many more.

NLP is a great area if you are planning to work in the area of artificial intelligence.

High Level Look of AI/ML Algorithms

Best Machine Learning Algorithms for Classification: Pros and Cons

Business Analytics in one image

Curated papers, articles, and blogs on data science & machine learning in production from companies like Google, LinkedIn, Uber, Facebook, Twitter, Airbnb, and …

  1. Data Quality
  2. Data Engineering
  3. Data Discovery
  4. Feature Stores
  5. Classification
  6. Regression
  7. Forecasting
  8. Recommendation
  9. Search & Ranking
  10. Embeddings
  11. Natural Language Processing
  12. Sequence Modelling
  13. Computer Vision
  14. Reinforcement Learning
  15. Anomaly Detection
  16. Graph
  17. Optimization
  18. Information Extraction
  19. Weak Supervision
  20. Generation
  21. Audio
  22. Validation and A/B Testing
  23. Model Management
  24. Efficiency
  25. Ethics
  26. Infra
  27. MLOps Platforms
  28. Practices
  29. Team Structure
  30. Fails

How to get a job in data science – a semi-harsh Q/A guide.

HOW DO I GET A JOB IN DATA SCIENCE?

Hey you. Yes you, person asking “how do I get a job in data science/analytics/MLE/AI whatever BS job with data in the title?”. I got news for you. There are two simple rules to getting one of these jobs.

Have experience.

Don’t have no experience.

For every entry level job, there are approximately 1000 entry level candidates who think they’re qualified because they did a 24 week bootcamp. I don’t need to be a statistician to tell you your odds of landing one of these aren’t great.

HOW DO I GET EXPERIENCE?

Are you currently employed? If not, get a job. If you are, figure out a way to apply data science in your job, then put it on your resume. Mega bonus points here if you can figure out a way to attribute a dollar value to your contribution. Talk to your supervisor about career aspirations at year-end/mid-year reviews. Maybe you’ll find a way to transfer to a role internally and skip the whole resume ignoring phase. Alternatively, network. Be friends with people who are in the roles you want to be in, maybe they’ll help you find a job at their company.

WHY AM I NOT GETTING INTERVIEWS?

IDK. Maybe you don’t have the required experience. Maybe there are 500+ other people applying for the same position. Maybe your resume stinks. If you’re getting 1/20 response rate, you’re doing great. Quit whining.

IS XYZ DEGREE GOOD FOR DATA SCIENCE?

Does your degree involve some sort of non-remedial math higher than college algebra? Does your degree involve taking any sort of programming classes? If yes, congratulations, your degree will pass most base requirements for data science. Is it the best? Probably not, unless you’re CS or some really heavy math degree where half your classes are taught in Greek letters. Don’t come at me with those art history and underwater basket weaving degrees unless you have multiple years experience doing something else.

SHOULD I DO XYZ BOOTCAMP/MICROMASTERS?

Do you have experience? No? This ain’t gonna help you as much as you think it might. Are you experienced and want to learn more about how data science works? This could be helpful.

SHOULD I DO XYZ MASTER’S IN DATA SCIENCE PROGRAM?

Congratulations, doing a Master’s is usually a good idea and will help make you more competitive as a candidate. Should you shell out 100K for one when you can pay 10K for one online? Probably not. In all likelihood, you’re not gonna get $90K in marginal benefit from the more expensive program. Pick a known school (probably avoid really obscure schools, the name does count for a little) and you’ll be fine. Big bonus here if you can sucker your employer into paying for it.

WILL XYZ CERTIFICATE HELP MY RESUME?

Does your certificate say “AWS” or “AZURE” on it? If not, no.

DO I NEED TO KNOW XYZ MATH TOPIC?

Yes. Stop asking. Probably learn probability, be familiar with linear algebra, and understand what the hell a partial derivative is. Learn how to test hypotheses. Ultimately you need to know what the heck is going on math-wise in your predictions otherwise the company is going to go bankrupt and it will be all your fault.

WHAT IF I’M BAD AT MATH?

Do some studying or something. MIT opencourseware has a bunch of free recorded math classes. If you want to learn some Linear Algebra, Gilbert Strang is your guy.

WHAT PROGRAMMING LANGUAGES SHOULD I LEARN?

STOP ASKING THIS QUESTION. I CAN GOOGLE “HOW TO BE A DATA SCIENTIST” AND EVERY SINGLE GARBAGE TDS ARTICLE WILL TELL YOU SQL AND PYTHON/R. YOU’RE LUCKY YOU DON’T HAVE TO DEAL WITH THE JOY OF SEGMENTATION FAULTS TO RUN A SIMPLE LINEAR REGRESSION.

SHOULD I LEARN PYTHON OR R?

Both. Python is more widely used and tends to be more general purpose than R. R is better at statistics and data analysis, but is a bit more niche. Take your pick to start, but ultimately you’re gonna want to learn both you slacker.

SHOULD I MAKE A PORTFOLIO?

Yes. And don’t put some BS housing price regression, iris classification, or titanic survival project on it either. Next question.

WHAT SHOULD I DO AS A PROJECT?

IDK, what are you interested in? If you say twitter sentiment stock market prediction, go sit in the corner and think about what you just said. Every half brained first year student who can pip install sklearn and do model.fit() has tried unsuccessfully to predict the stock market. The efficient market hypothesis is a thing for a reason. There are literally millions of other free datasets out there, and you have one of the most powerful search engines at your fingertips to go find them. Pick something you’re interested in, find some data, and analyze it.

DO I NEED TO BE GOOD WITH PEOPLE? (courtesy of /u/bikeskata)

Yes! First, when you’re applying, no one wants to work with a weirdo. You should be able to have a basic conversation with people, and they shouldn’t come away from it thinking you’ll follow them home and wear their skin as a suit. Once you get a job, you’ll be interacting with colleagues, and you’ll need them to care about your analysis. Presumably, there are non-technical people making decisions you’ll need to bring in as well. If you can’t explain to a moderately intelligent person why they should care about the thing that took you 3 days (and cost $$$ in cloud computing costs), you probably won’t have your position for long. You don’t need to be the life of the party, but you should be pleasant to be around.

Credit: u/save_the_panda_bears

Why is columnar storage efficient for analytics workloads?

  • Columnar Storage enables better compression ratios and improves table scans for aggregate and complex queries.
  • Is optimized for scanning large data sets and complex analytics queries
  • Enables a data block to store and compress significantly more values for a column compared to row-based storage
  • Eliminates the need to read redundant data by reading only the columns that you include in your query.
  • Offers overall performance benefits that can help eliminate the need to aggregate data into cubes as in some other OLAP systems.

What are the integrated data sources for Amazon Redshift?

  • AWS DMS
  • Amazon DynamoDB
  • AWS Glue
  • Amazon EMR
  • Amazon Kinesis
  • Amazon S3
  • SSH enabled host

How do you interact with Amazon Redshift?

  • AWS management console
  • AWS CLI
  • AWS SDKs
  • Amazon Redshift Query API
  • or SQL Client tools that support JDBC and ODBC protocols

How do you bound a set of data points (fitting, data, Mathematica)?

One of the first things you need to do when fitting a model to data is to ensure that all of your data points are within the range of the model. This is known as “bounding” the data points. There are a few different ways to bound data points, but one of the most commonly used methods is to simply discard any data points that are outside of the range of the model. This can be done manually, but it’s often more convenient to use a tool like Mathematica to automate the process. By bounding your data points, you can be sure that your model will fit the data more accurately.

Any good data scientist knows that fitting a model to data is essential to understanding the underlying patterns in that data. But fitting a model is only half the battle; once you’ve fit a model, you need to determine how well it actually fits the data. This is where bounding comes in.

Bounding allows you to assess how well a given set of data points fits within the range of values predicted by a model. It’s a simple concept, but it can be mathematically complex to actually do. Mathematica makes it easy, though, with its built-in function for fitting and bounding data. Just input your data and let Mathematica do the work for you!
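The answer mentions Mathematica; to stay consistent with the Python used elsewhere in this document, here is a minimal sketch of the discard approach described above (the data points and the model's valid range are made-up examples):

```python
import numpy as np

x = np.array([0.5, 1.2, 3.7, 8.9, 12.4])  # made-up observations
y = np.array([1.1, 2.0, 4.2, 9.5, 13.0])

lo, hi = 0.0, 10.0                         # assumed valid range of the model
mask = (x >= lo) & (x <= hi)               # True for in-range points

x_bounded, y_bounded = x[mask], y[mask]    # discard out-of-range points
print(x_bounded)                           # [0.5 1.2 3.7 8.9]
```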

In SQL, What is the Difference between DDL, DCL, and DML?


Data definition language (DDL) refers to the subset of SQL commands that define data structures and objects such as databases, tables, and views. DDL commands include the following:

• CREATE: used to create a new object.

• DROP: used to delete an object.

• ALTER: used to modify an object.

• RENAME: used to rename an object.

• TRUNCATE: used to remove all rows from a table without deleting the table itself.

Data manipulation language (DML) refers to the subset of SQL commands that are used to work with data. DML commands include the following:

• SELECT: used to request records from one or more tables.

• INSERT: used to insert one or more records into a table.

• UPDATE: used to modify the data of one or more records in a table.

• DELETE: used to delete one or more records from a table.

• EXPLAIN: used to analyze and display the expected execution plan of a SQL statement.

• LOCK: used to lock a table from write operations (INSERT, UPDATE, DELETE) and prevent concurrent operations from conflicting with one another.

Data control language (DCL) refers to the subset of SQL commands that are used to configure permissions to objects. DCL commands include:

• GRANT: used to grant access and permissions to a database or object in a database, such as a schema or table.

• REVOKE: used to remove access and permissions from a database or objects in a database
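To keep this document's examples in Python, here is a minimal sqlite3 sketch that exercises one DDL command (CREATE) and several DML commands (INSERT, UPDATE, SELECT, DELETE); SQLite does not implement DCL commands such as GRANT and REVOKE, so those are omitted:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# DDL: define the structure
cur.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")

# DML: work with the data
cur.execute("INSERT INTO users (name) VALUES (?)", ("Alice",))
cur.execute("UPDATE users SET name = ? WHERE id = ?", ("Alicia", 1))
print(cur.execute("SELECT id, name FROM users").fetchall())  # [(1, 'Alicia')]
cur.execute("DELETE FROM users WHERE id = ?", (1,))

conn.close()
```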

What is Big Data?

“Big Data is high-volume, high-velocity, and/or high-variety Information assets that demand cost-effective, innovative forms of information processing that enable enhanced insight, decision making, and process automation.” (Gartner)

What are the 5 Vs of Big Data?

  • Volume: scale of the data
  • Variety: different forms and sources of data
  • Velocity: speed at which data is generated and must be processed
  • Variability: consistency of the data’s meaning over time
  • Veracity: quality and trustworthiness of the data

What are typical Use Cases of Big Data?

  • Customer segmentation
  • Marketing spend optimization
  • Financial modeling and forecasting
  • Ad targeting and real-time bidding
  • Clickstream analysis
  • Fraud detection

What are examples of Data Sources?

  • Relational Databases
  • NoSQL databases
  • Web servers
  • Mobile phones
  • Tablets
  • Data feeds

What are examples of Data Formats?

  • Structured, semi-structured, and unstructured
  • Text
  • Binary
  • Streaming and near real-time
  • Batched

Big Data vs Data Warehouses

Big Data is a concept. 

A data warehouse:

  • can be used with both small and large datasets
  • can be used in a Big Data system

How should you split your data up for loading into the data warehouse?

Use the same number of files as you have slices in your cluster, or a multiple of the number of slices.

Why do tables need to be vacuumed?

When values are deleted from a table, Amazon Redshift does not automatically reclaim the space.

Difference Between Amazon Redshift SQL and PostgreSQL

Amazon Redshift SQL is based on PostgreSQL 8.0.2 but has important implementation differences:

  • COPY is highly specialized to enable loading of data from other AWS services and to facilitate automatic compression.
  • VACUUM reclaims disk space and re-sorts all rows.
  • Some PostgreSQL features, data types, and functions are not supported in Amazon Redshift.

What is the difference between STL tables and STV tables in Redshift?

STL tables contain log data that has been persisted to disk. STV tables contain snapshots of the current system based on transient, in-memory data that is not persisted to disk-based logs or other tables.

How does code compilation affect query performance in Redshift?

The compiled code is cached and available across sessions to speed up subsequent processing of that query.

What is data redistribution in Redshift?

The process of moving data around the cluster to facilitate a join.

What is Dark Data?

Dark data is data that is collected and stored but never used again.

Amazon EMR vs Amazon Redshift


Amazon Redshift Spectrum is the best of both worlds:

  • Can analyze data directly from Amazon S3, like Amazon EMR does
  • Retains efficient processing of highly complex queries, like Amazon Redshift does
  • And it’s built-in

Data Analytics Ecosystem on AWS:


Which tasks must be completed before using Amazon Redshift Spectrum?

  • Define an external schema and create tables.

What can be used as a data store for Amazon Redshift Spectrum?

  • Hive Metastore and AWS Glue.

What is the difference between the audit logging feature in Amazon Redshift and Amazon CloudTrail trails?

Redshift Audit logs contain information about database activities. Amazon CloudTrail trails contain information about service activities.

How can you receive notifications about events in your cluster?

Configure an Amazon SNS topic and choose events to trigger the notification to be sent to topic subscribers.

Where does Amazon Redshift store the snapshots used to backup your cluster?

In an Amazon S3 bucket.

Benefits of AWS DAS-C01 and AWS MLS-C01 Certifications:


Cap Theorem:


Data Warehouse Definition:


Data Warehouse are Subject-Oriented:


AWS Analytics Services:

• Amazon Elastic MapReduce (Amazon EMR) simplifies big data processing by providing a managed Hadoop framework that makes it easy, fast, and cost-effective for you to distribute and process vast amounts of your data across dynamically scalable Amazon Elastic Compute Cloud (Amazon EC2) instances. You can also run other popular distributed frameworks such as Apache Spark and Presto in Amazon EMR, and interact with data in other AWS data stores, such as Amazon S3 and Amazon DynamoDB.

• Amazon Elasticsearch Service is a managed service that makes it easy to deploy, operate, and scale Elasticsearch in the AWS cloud. Elasticsearch is a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and click stream analytics.

• Amazon Kinesis is a platform for streaming data on AWS that offers powerful services to make it easy to load and analyze streaming data, and that also provides the ability for you to build custom streaming data applications for specialized needs.

• Amazon Machine Learning provides visualization tools and wizards that guide you through the process of creating machine learning (ML) models without having to learn complex ML algorithms and technology. When your models are ready, Amazon Machine Learning makes it easy to obtain predictions for your application using simple APIs, without having to implement custom prediction generation code or manage any infrastructure.

• Amazon QuickSight is a very fast, cloud-powered business intelligence (BI) service that makes it easy for all employees to build visualizations, perform one-time analysis, and quickly get business insights from their data.

AWS Database Services:


Choosing between NoSQL or SQL Databases:


 

Can you give an example of a successful implementation of an enterprise wide data warehouse solution?

1- Data Warehouse Implementation at Phillips U.S.-based division

2- Financial Times

“Amazon Redshift is the single source of truth for our user data. It stores data on customer usage, customer service, and advertising, and then presents those data back to the business in multiple views.” –John O’Donovan, CTO, Financial Times

What is explained variation and unexplained variation in linear regression analysis?

In statistics, explained variation measures the proportion to which a mathematical model accounts for the variation (dispersion) of a given data set. Often, variation is quantified as variance; then, the more specific term explained variance can be used.

In linear regression, the goal is to find the line of best fit that minimizes the sum of squared residuals, where a residual is the difference between an actual y-value and its predicted y-value. The overall variation in the data can then be partitioned into two components. The explained variation is the sum of the squared differences between each predicted y-value and the mean of y. The unexplained variation is the sum of the squared differences between each observed y-value and its corresponding predicted y-value. In other words, explained variation measures how well the line of best fit accounts for the data, while unexplained variation measures how much error remains in the predictions. By understanding both components, data scientists can better judge a model and make more accurate predictions; a good model maximizes the explained variation and minimizes the unexplained variation.
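As a concrete illustration, here is a minimal numpy sketch (with made-up data) that computes both quantities from a least-squares fit; for ordinary least squares with an intercept, the two components add up to the total variation:

```python
import numpy as np

# Made-up data and a least-squares line fit
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
slope, intercept = np.polyfit(x, y, 1)
y_pred = slope * x + intercept

explained = np.sum((y_pred - y.mean()) ** 2)  # sum of (y_hat - y_bar)^2
unexplained = np.sum((y - y_pred) ** 2)       # sum of (y - y_hat)^2
total = np.sum((y - y.mean()) ** 2)           # total variation

print(round(explained + unexplained, 6) == round(total, 6))  # True
print(explained / total)  # R^2: the proportion of variation explained
```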

Normalization and Standardization are both rescaling techniques; they make your data unitless.

Assume you have two features, F1 and F2.

F1 ranges from 0 to 100, and F2 ranges from 0 to 0.10.

When you use an algorithm that uses distance as its measure, you run into a problem.

F1      F2
20      0.2
26      0.2
20      0.9

row 1 – row 2: |20 – 26| + |0.2 – 0.2| = 6

row 1 – row 3: |20 – 20| + |0.2 – 0.9| = 0.7

You might conclude that row 3 is nearest to row 1, but that is wrong.

The right way to calculate it is to rescale each difference by the feature’s range:

row 1 – row 2: |20 – 26|/100 + |0.2 – 0.2|/0.10 = 0.06

row 1 – row 3: |20 – 20|/100 + |0.2 – 0.9|/0.10 = 7

So row 2 is the nearest to row 1.

Normalization brings data between 0 and 1.

Standardization rescales data to a mean of 0 and a standard deviation of 1.

Normalization = ( X – Xmin) / (Xmax – Xmin)

Standardization = (x – µ ) / σ
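A few lines of Python make both formulas concrete, using the F1 values from the example above:

```python
import numpy as np

f1 = np.array([20.0, 26.0, 20.0])  # the F1 column from the example

normalized = (f1 - f1.min()) / (f1.max() - f1.min())  # min-max: values in [0, 1]
standardized = (f1 - f1.mean()) / f1.std()            # z-score: mean 0, std 1

print(normalized)    # [0. 1. 0.]
print(standardized)  # approximately [-0.71  1.41 -0.71]
```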

Regularization is a concept related to underfitting and overfitting:

If the error is high on both the training data and the test data, the model is underfitting.

If the error is high on the test data but low on the training data, the model is overfitting.

Regularization is the way to manage this trade-off and reach the optimal error. Source: ABC of Data Science

TensorFlow

TensorFlow is an open-source machine learning library developed at Google for numerical computation using data flow graphs, and it is arguably one of the best, with Gmail, Uber, Airbnb, Nvidia, and lots of other prominent brands using it. It’s handy for creating and experimenting with deep learning architectures, and its formulation is convenient for data integration, such as inputting graphs, SQL tables, and images together.

Deepchecks

Deepchecks is a Python package for comprehensively validating your machine learning models and data with minimal effort. This includes checks related to various types of issues, such as model performance, data integrity, distribution mismatches, and more.

Scikit-learn

Scikit-learn is a very popular open-source machine learning library for the Python programming language. Constant updates for efficiency improvements, coupled with the fact that it is open source, make it a go-to framework for machine learning in the industry.
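As a taste of its uniform fit/predict API, here is a minimal toy example on scikit-learn’s built-in iris dataset (an illustration only, not a serious model):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)  # every estimator exposes fit/predict
model.fit(X_train, y_train)
print(model.score(X_test, y_test))         # mean accuracy on held-out data
```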

Keras

Keras is an open-source neural network library written in Python. It is capable of running on top of other popular lower-level libraries such as Tensorflow, Theano & CNTK. This one might be your new best friend if you have a lot of data and/or you’re after the state-of-the-art in AI: deep learning.

Pandas

Pandas is yet another open-source software library written for the Python programming language for data manipulation and analysis. In particular, it offers data structures and operations for manipulating numerical tables and time series. Pandas works well with incomplete, messy, and unlabeled data and provides tools for shaping, merging, reshaping, and slicing datasets.
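For instance, here is a small sketch of the kind of cleanup Pandas is good at, using made-up data with a missing value:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"city": ["Paris", "Oslo", "Lima"],
                   "temp": [14.0, np.nan, 19.5]})  # deliberately incomplete

df["temp"] = df["temp"].fillna(df["temp"].mean())  # fill the missing value
print(df.sort_values("temp"))                      # slice/reshape as needed
```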

Spark MLlib

Spark MLlib is a popular machine learning library. According to one survey, almost 6% of data scientists use it. The library has support for Java, Scala, Python, and R, and you can use it on Hadoop, Apache Mesos, Kubernetes, and other cloud services against multiple data sources.

PyTorch

PyTorch was developed by Facebook’s artificial intelligence research group, and it is the primary software tool for deep learning after TensorFlow. Unlike TensorFlow, the PyTorch library operates with a dynamically updated graph, meaning you can make changes to the architecture as you go. By Niklas Steiner

Whenever we fit a machine learning algorithm to a dataset, we typically split the dataset into three parts:

1. Training Set: Used to train the model.

2. Validation Set: Used to optimize model parameters.

3. Test Set: Used to get an unbiased estimate of the final model performance.

The following diagram provides a visual explanation of these three different types of datasets:

One point of confusion for students is the difference between the validation set and the test set.

In simple terms, the validation set is used to optimize the model parameters while the test set is used to provide an unbiased estimate of the final model.

It can be shown that the error rate as measured by k-fold cross validation tends to underestimate the true error rate once the model is applied to an unseen dataset.

Thus, we evaluate the final model on the test set to get an unbiased estimate of what the true error rate will be in the real world.
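Assuming scikit-learn is available, one common way to produce the three sets is two chained splits (the data here is made up):

```python
import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(200).reshape(100, 2)  # made-up features
y = np.arange(100) % 2              # made-up labels

# First carve out the test set, then split the remainder into train/validation.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
X_train, X_val, y_train, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=42)

print(len(X_train), len(X_val), len(X_test))  # 60 20 20, i.e. a 60/20/20 split
```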

If you are looking for a solid way of testing your ML algorithms, then I would recommend this open-source interactive demo

Source: ABC of Data Science and ML

When do we need to normalize data for Machine Learning?

The general answer to this question is: when our model needs it!

Yeah, That’s it!

In detail:


  1. When we feel like the model we are going to use can’t read the format of the data we have, we need to normalise the data.

e.g. When our data is in ‘text’ form, we perform Lemmatization, Stemming, etc. to normalize/transform it.

2. Another case is when the values in certain columns (features) do not scale with the other features; this may lead to poor performance of our model. We need to normalise our data here as well (better said, the features have different ranges).

e.g. Features: F1, F2, F3

range(F1): 0 – 100

range(F2): 50 – 100

range(F3): 900 – 10,000

In the above situation, the model would give more importance to F3 (bigger numerical values), and thus our model would be biased, resulting in bad accuracy. Here, we need to apply scaling (for example, the StandardScaler() function in Python’s scikit-learn).

Transformation, Scaling; these are some common Normalisation methods.
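Here is a minimal sketch of the scaling fix for the F1/F2/F3 situation above (assuming scikit-learn is installed; the numbers are made up):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Three made-up rows whose features have very different ranges (F1, F2, F3)
X = np.array([[10.0, 55.0,  950.0],
              [80.0, 90.0, 9500.0],
              [45.0, 70.0, 4000.0]])

X_scaled = StandardScaler().fit_transform(X)  # each column: mean 0, std 1
print(X_scaled.mean(axis=0).round(6))         # ~[0. 0. 0.]
print(X_scaled.std(axis=0).round(6))          # ~[1. 1. 1.]
```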

Go through these two articles to get a better understanding:

  1. Understand Data Normalization in Machine Learning
  2. Why Data Normalization is necessary for Machine Learning models

Source: ABC of Data Science and ML

Is it possible to use linear regression for forecasting on non-stationary data (time series)? If yes, then how can we do that? If no, then why not?

 

Linear regression is a machine learning algorithm that can be used to predict future values based on past data points. It is typically used on stationary data, which means that the statistical properties of the data do not change over time. However, it is possible to use linear regression on non-stationary data, with some modifications. The first step is to stationarize the data, which can be done by detrending or differencing the data. Once the data is stationarized, linear regression can be used as usual. However, it is important to keep in mind that the predictions may not be as accurate as they would be if the data were stationary.

Linear regression is a machine learning algorithm that is often used for forecasting. However, it is important to note that plain linear regression assumes stationary data, meaning the data must be free of trend and seasonality; if the data is not stationary, the forecast will be inaccurate. There are various ways to stationarize data, such as differencing or using a moving average. Once the data is stationarized, linear regression can be used to generate forecasts. If the data cannot be made stationary, another model, such as an ARIMA model, should be used instead.
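Here is a sketch of the differencing approach on a made-up trending series: fit a simple linear model to the differenced (roughly stationary) series, forecast the next difference, then integrate back to the original scale:

```python
import numpy as np

# Made-up non-stationary series: an upward trend plus noise
rng = np.random.default_rng(0)
series = np.cumsum(rng.normal(1.0, 0.5, size=100))

diff = np.diff(series)                     # first differencing -> roughly stationary
t = np.arange(len(diff))

slope, intercept = np.polyfit(t, diff, 1)  # linear regression on the differences
next_diff = slope * len(diff) + intercept  # predict the next difference
forecast = series[-1] + next_diff          # undo the differencing
print(forecast)
```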

Top 75 Data Science YouTube Channels

1- Alex The Analyst
2- Tina Huang
3- Abhishek Thakur
4- Michael Galarnyk
5- How to Get an Analytics Job
6- Ken Jee
7- Data Professor
8- Nicholas Renotte
9- KNN Clips
10- Ternary Data: Data Engineering Consulting
11- AI Basics with Mike
12- Matt Brattin
13- Chronic Coder
14- Intersnacktional
15- Jenny Tumay
16- Coding Professor
17- DataTalksClub
18- Ken’s Nearest Neighbors Podcast
19- Karolina Sowinska
20- Lander Analytics
21- Lights OnData
22- CodeEmporium
23- Andreas Mueller
24- Nate at StrataScratch
25- Kaggle
26- Data Interview Pro
27- Jordan Harrod
28- Leo Isikdogan
29- Jacob Amaral
30- Bukola
31- AndrewMoMoney
32- Andreas Kretz
33- Python Programmer
34- Machine Learning with Phil
35- Art of Visualization
36- Machine Learning University
 



Machine Learning Engineer Interview Questions and Answers

What are some good datasets for Data Science and Machine Learning?

O(n) Rotational Cipher in Python

Rotational Cipher: One simple way to encrypt a string is to “rotate” every alphanumeric character by a certain amount. Rotating a character means replacing it with another character that is a certain number of steps away in normal alphabetic or numerical order. For example, if the string “Zebra-493?” is rotated 3 places, the resulting string is “Cheud-726?”. Every alphabetic character is replaced with the character 3 letters higher (wrapping around from Z to A), and every numeric character replaced with the character 3 digits higher (wrapping around from 9 to 0). Note that the non-alphanumeric characters remain unchanged. Given a string and a rotation factor, return an encrypted string.

Signature

string rotationalCipher(string input, int rotationFactor)

Input

1 <= |input| <= 1,000,000
0 <= rotationFactor <= 1,000,000

Output

Return the result of rotating input a number of times equal to rotationFactor.


Example 1

input = Zebra-493?

rotationFactor = 3

output = Cheud-726?



Example 2

input = abcdefghijklmNOPQRSTUVWXYZ0123456789

rotationFactor = 39

output = nopqrstuvwxyzABCDEFGHIJKLM9012345678


O(n) Solution in Python:

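The original solution is shown as an image; below is a minimal O(n) Python sketch that reproduces the examples (it uses modular arithmetic directly instead of the lookup dictionaries the test output suggests):

```python
def rotational_cipher(text: str, rotation_factor: int) -> str:
    out = []
    for ch in text:
        if ch.isdigit():
            out.append(chr((ord(ch) - ord("0") + rotation_factor) % 10 + ord("0")))
        elif ch.islower():
            out.append(chr((ord(ch) - ord("a") + rotation_factor) % 26 + ord("a")))
        elif ch.isupper():
            out.append(chr((ord(ch) - ord("A") + rotation_factor) % 26 + ord("A")))
        else:
            out.append(ch)  # non-alphanumeric characters are unchanged
    return "".join(out)

print(rotational_cipher("Zebra-493?", 3))
# Cheud-726?
print(rotational_cipher("abcdefghijklmNOPQRSTUVWXYZ0123456789", 39))
# nopqrstuvwxyzABCDEFGHIJKLM9012345678
```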

Test 1:

PS C:\dev\scripts> .\test_rotational_cipher_python.py
Length Dic Numbers : 10
Length Dic lowercase : 26
Length Dic uppercase : 26
Input: Zebra-493?
Output: Cheud-726?

Test 2:

PS C:\dev\scripts> .\test_rotational_cipher_python.py
Length Dic Numbers : 10
Length Dic lowercase : 26
Length Dic uppercase : 26
Input: abcdefghijklmNOPQRSTUVWXYZ0123456789
Output: nopqrstuvwxyzABCDEFGHIJKLM9012345678
