What are some coding anti-patterns that can easily slip through code reviews?


Programmers are a notoriously irritable bunch. We’re constantly getting in arguments with each other about the best way to do things. This is largely because there is no one “right” way to code – it’s more of an art than a science. However, there are some coding practices that are universally agreed to be bad form. These are known as “coding anti-patterns,” and they can easily slip through code reviews if you’re not careful.

One common coding anti-pattern is “spaghetti code.” This is code that is so tangled and convoluted that it’s impossible to follow. It’s the software equivalent of a bowl of spaghetti – a jumbled mess that you can’t make heads or tails of. Spaghetti code can be very difficult to debug and maintain, so it’s best to avoid it if at all possible.

Another coding anti-pattern is “copy-and-paste programming.” This is when a programmer takes some existing code, copies it, and then modifies it slightly to suit their needs. This might seem like a quick and easy way to get the job done, but it often leads to duplicated code that is hard to keep track of. It also makes it more difficult to make global changes, since you have to remember to change every instance of the duplicated code. Copy-and-paste programming might be tempting, but it’s usually a bad idea in the long run.
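The duplication is easiest to see in a small sketch. Below is a hypothetical Python example (the function names and validation rules are invented for illustration): instead of pasting the same check into both functions, the shared logic lives in one helper, so a future rule change happens in exactly one place.

```python
def _validate_username(name):
    """Shared rule, defined once: a future rule change happens here only."""
    return 3 <= len(name) <= 20 and name.isalnum()

def register_user(name):
    if not _validate_username(name):
        raise ValueError("invalid username")
    return {"user": name, "status": "registered"}

def rename_user(user, new_name):
    # Without the helper, this check would be a pasted copy of the one
    # above -- and easy to forget when the rules change.
    if not _validate_username(new_name):
        raise ValueError("invalid username")
    user["user"] = new_name
    return user
```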

These are just a few of the many coding anti-patterns that can easily slip through code reviews. So next time you’re doing a review, keep an eye out for them – and try not to let them slip through!


Below is an aggregated list of some coding anti-patterns that can easily slip through code reviews.

  • Comments: We all want to write meaningful comments to explain our code, but what if someone writes four paragraphs of comments explaining exactly what a piece of code does? That will pass the code review without a problem, yet it frustrates the developers who have to maintain the code: every time they need to change it, they also have to wade through the four paragraphs and perhaps rewrite the whole thing – so they end up not touching that code at all.
  • SRP: We want our code to respect the Single Responsibility Principle, and we want developers to write small units of logic that are easy to test. But what happens when you write too many units? That will also pass the code review – if anyone asks, you can say you wrote the code to be easily testable – but past a certain threshold it becomes frustrating to jump between 20 methods in 10 classes just to do a simple task. It becomes the real spaghetti code.

    SRP is a principle, not a pattern. From my experience, DRY should guide one to OCP and OCP to SRP.

    The acronyms are explained here: SOLID principles (plus DRY, YAGNI, KISS, and others).

  • Indifferent Architecture: You like a framework, so you use it for your next project without thinking much about it. You put all the Controllers in the Controllers folder, all the Services in the Services folder, all the Helpers in the Helpers folder, and because frameworks (Rails, Laravel, etc.) operate with a certain level of magic, the simple act of putting your Model in the Models folder gives you a level of assistance that you will love… This will have no problem slipping through the code review because, guess what, you’re following the framework’s guidelines. But fast-forward a few months and you end up with the monolith that we all love to hate, and then your developers start hating on monoliths and want to go microservices… The real issue is not the monolith; the real issue was the lack of design and architecture.

The biggest anti-pattern that will slip through code reviews very easily is the singleton pattern. It is an anti-pattern for two reasons:

  1. What is unique today may be duplicated tomorrow: the classic case here is that 20 years ago we used to have one screen per workstation; today two or even three and four screens are increasingly common. This means that if your development environment uses a singleton for the screen, you are now in trouble!
  2. Even if you really do have just one (say, a configuration file), the implementations flying around are absolutely horrific 99.99% of the time.

Right, so, why is the mainstream implementation horrific? Here is what people will generally do: because the pattern says that there must be only one instance of a class, they will hide the constructor and instead have a static method called “getInstance” or something similar to create the class and reuse it across the board.
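In Python, that common shape looks roughly like this (the `Config` class and its contents are hypothetical; the point is the hidden global reached through a static accessor):

```python
class Config:
    """The widespread singleton shape: one shared instance,
    handed out by a static accessor instead of the constructor."""
    _instance = None

    def __init__(self):
        self.settings = {"debug": False}

    @classmethod
    def get_instance(cls):
        # Lazily create and cache the single instance.
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

# Every caller that writes Config.get_instance() deep inside its own
# code now has a hidden dependency on this global -- hard to mock.
```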


That is the wrong way to go about it. What you should be doing is this instead:

  1. Make the entire singleton class private
  2. Have a normally allocatable class made public
  3. In the public class’ implementation (which has to reside in the same file) create the private class as required (maybe as a static field! That is completely fine)
  4. Use the public class

This is how you should do a singleton, but that is not what you see around. The net result of the common implementation is a hidden dependency on the singleton, which then means a lot of stuff cannot be tested properly without bringing the singleton in (so you can’t, for example, easily mock it).

Please stop doing singletons or, if you can’t, please do get them right.



Code reviews are really important. However, without a good set of coding standards, they can often become “this is my preference”.

Here’s my suggestion on how to avoid anti-patterns slipping through code reviews:

  • Read through Martin Fowler’s book “Refactoring”.
  • As a team, figure out what people think are anti-patterns.
  • Agree on a list. Define these anti-patterns in your coding standards.
  • Make sure everyone reads the coding standards, and can access it easily.
  • Then, you have given one another permission to call each other out when that class gets too large, or the method gets too long, or the method has too many parameters.

“Early exit” — the coolest and simplest thing.


The idea is to exit the code block as soon as you can. A few bonuses arise from this pattern:

  1. Your code is likely more focused on the purpose of the block. Better at avoiding a kind of “run-on sentence” type of programming.
  2. Reduced nesting. The same exact code can be written where the complicated code is within a nested bracket given a condition, but this helps keep your more complicated code at the tail end instead of nested near the top of a function.
  3. Helpful to reinforce the fact that validation and parameter checking should be done first. You get used to it and functions start to look weird if they don’t validate input parameters.
  4. Much easier for others to debug your code. Most of the validation is near the top. Less mental brainpower needed because the code is a bit more readable.

Personally, I really like how it makes my code look like block paragraphs. It makes it easy to skim and read quickly.

From a distance you can see how it forms blocky paragraphs.
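A minimal sketch of the early-exit style, using a hypothetical order-processing function (all names and rules are invented for illustration) – the guard clauses sit at the top, and the real work sits un-nested at the bottom:

```python
def process_order(order):
    # Guard clauses first: validate and bail out as early as possible.
    if order is None:
        raise ValueError("no order given")
    if not order.get("items"):
        return {"status": "rejected", "reason": "empty order"}
    if order.get("paid") is not True:
        return {"status": "rejected", "reason": "unpaid"}

    # Every guard has passed; the complicated logic lives here,
    # at zero extra indentation.
    total = sum(item["price"] * item["qty"] for item in order["items"])
    return {"status": "accepted", "total": total}
```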

The software company SmartBear published a small white paper with 11 good practices for an effective code review process:

  1. Review fewer than 200-400 lines of code (LOC) at a time: More than 400 LOC will demand more time and will demoralise the reviewer, who will know beforehand that the task will take an enormous amount of time.
  2. Aim for an inspection rate of less than 300-500 LOC/hour: It is preferable to review fewer LOC but to look for situations such as bugs, possible security holes, possible optimisation failures, and even design or architecture flaws.
  3. Take enough time for a proper, slow review, but not more than 60-90 minutes: As it is a task that requires attention to detail, the ability to concentrate decreases drastically the longer the task takes. From personal experience, after 60 minutes of effective code review, either you take a break (go for a coffee, get up from the chair and stretch, read an article, etc.), or you start being complacent with the code on sensitive matters such as security, optimisation, and scalability.
  4. Authors should annotate source code before the review begins: It is important for the author to tell colleagues which files should be reviewed, preventing previously reviewed code from being validated again.
  5. Establish quantifiable goals for code review and capture metrics so you can improve your processes: It is important that the management team has a way of quantifying whether the code review process is effective, such as tracking the number of bugs reported by the client.
  6. Checklists substantially improve results for both authors and reviewers: What should be reviewed? Without a list, each engineer may search for something in particular and overlook other important points.
  7. Verify that defects are actually fixed! It isn’t enough for a reviewer to indicate where the faults are or to suggest improvements. It’s not a matter of trusting colleagues; it’s important to validate that the changes were in fact well implemented.
  8. Managers must foster a good code review culture in which finding defects is viewed positively: It is necessary to avoid the culture of “why didn’t you write it well the first time?”. Ideally, zero bugs reach production; the development and review stage is where they should be found. It is important to have room for an engineer to make a mistake – only then can you learn something new.
  9. Beware the “Big Brother” effect: Similar to point 8, but from the engineer’s perspective. Be aware that the suggestions and bugs reported in code reviews are quantifiable. This data should help managers see whether the process is working or whether an engineer is in particular difficulty, but it should never be used for performance evaluations.
  10. The Ego Effect: Do at least some code review, even if you don’t have time to review it all: Knowing that our code will be peer reviewed prompts us to be more careful about what we write.
  11. Lightweight-style code reviews are efficient, practical, and effective at finding bugs: It isn’t necessary to follow the procedure described by IBM 30 years ago, where 5-10 people would shut themselves in periodic meetings with printouts of code and scribble on each line. Using tools like Git, you can participate in the code review process, write and associate comments with specific lines, discuss solutions through asynchronous messages with the author, etc.

Source: Quora

This is a somewhat longer answer to the question – tool recommendations are at the end.


First, some background. I wrote a Master’s thesis about conducting efficient code reviews in small software companies, partly based on a case study of our own projects in a small (10-employee) software company producing apps for Mac and iOS.

During the last 6-7 years I’ve evaluated various code review tools, including:

  • Atlassian Crucible (SVN, CVS and Perforce)
  • Google Gerrit (for Git)
  • Facebook Phabricator Differential (Git, Hg, SVN)
  • SmartBear Code Collaborator (supports pretty much anything)
  • Bitbucket code comments
  • Github code comments

At some point I’ve also just manually reviewed patches which were e-mailed after each commit/push.

I’ve tried many variations of the code review process:

  • pre-commit vs. post-commit
  • collecting various metrics & continuously trying to optimize the process vs. keeping it as simple as possible
  • making code review required for every line vs. letting developers decide what to review
  • using checklists vs. relying on developers’ experience-based intuition

Based on my experience with the code review process itself and the tools mentioned above, within the context of a small software company, I would make the following three points about code reviews:

 

  1. Code reviews are very useful and should be conducted even for software which may not be very “mission critical”. The list of benefits is too long to discuss here in detail, but the short version: supplementing testing/QA by ensuring quality and reducing rework, sharing knowledge about code, architecture and best practices, ensuring consistency, and increasing the “bus count”. It’s well worth the price of 10-20% of each developer’s time.
  2. Code reviews shouldn’t require a complex tool (some of which require maintenance of their own) or a time-consuming process. Preferably, no external tool at all.
  3. Code reviews should be a natural part of the development process of each and every feature.

Based on those points, I would recommend the following process & tools:

  1. Use Bitbucket or Github for your source control
  2. Use hgflow/gitflow (or similar) process for your product development
  3. The author creates a Pull Request for a feature branch when it’s ready for review. The author describes the Pull Request to the reviewer either in PR comments (with prose, diagrams, etc.) or directly face-to-face.
  4. The reviewer reviews the Pull Request in Bitbucket/Github. A discussion can be had as Github/Bitbucket comments at the PR level, at the code level, face-to-face, or a combination of all of those.
  5. When the review is done, the feature branch is merged in.
  6. Every feature goes through the same process.

So, my recommended tools are the same ones you should already be using for your source code control:

  • Bitbucket Pull Requests
  • Github Pull Requests
  • Atlassian Stash Pull Requests (if you need to keep the code in-house)

A quick baseline checklist for reviewing a Pull Request:

  • Unit tests are above the minimum threshold
  • Consistent naming convention with the rest of the codebase
  • No duplication of functionality
  • Properly linted/formatted code

Code Review Checklist:

  1. Logic: Is your logic correct according to the use cases?
  2. Performance: Is there a better approach/algorithm to solve the use case?
  3. Testing: Have unit tests been written? Do they cover all the scenarios and edge cases? Have manual feature tests / integration tests been performed? (I usually consider code review too early a stage to require integration tests; I’m fine if the changes have been tested in a local stack.)
  4. SOR: I call this separation of responsibility. Is the necessary control abstraction present in your low-level design? How modular is your codebase? Is there a DAO layer before the database? Is there a client layer? Is there a manager layer? How are exceptions handled? Who takes care of logging? How generic can these methods be? What kind of methods should they expose, and what responsibility should they own at each level? This is probably the best place to apply your knowledge of Design Patterns. This component also decides how generic, scalable, and extensible your system can be.
  5. Readability: Short and descriptive variable/method names. Standard wording without grammatical mistakes. Methods kept small. A consistent naming convention throughout the package, be it camel case or snake case. Consistent naming of variables – do not refer to the same entity differently in different places in your code; avoid unnecessary confusion. Define the scope of every class/method/variable, and when adding a new class or method, think about who is going to use it and who is not.
  6. Automation: If the same few lines of code are written in multiple places, move them to a method/utility. Avoid redundancy. Make the best use of reusability.
  7. Documentation: Draft the HLD/LLD in a wiki or a document. The key design decisions, the proofs of concept, and the reviews/suggestions by senior developers should be consolidated in one single place. This point is not relevant for all code reviews, but for key implementation reviews it serves as a recipe for the reviewer. Apart from these high-level docs, make sure that you have javadocs/scaladocs for all public methods. Avoid comments as much as possible; make your code self-explanatory.
  8. Best Practices: Read the manuals/articles/research papers (in very few scenarios) of the frameworks you consume. Be an ardent visitor of Stack Overflow, check for the best ways to implement a complex use case, and verify that the code abides by them.

I spend quite a bit of time reviewing code, and some of the common problems I’ve found are:

  1. Over-architecture, by creating a lot of superficial interfaces
  2. Premature optimization of code
  3. Reinventing the wheel when something similar already exists in open source or inside the codebase
  4. Coming up with a totally new pattern for doing things when the problem is already solved in the code
  5. Trying hard to fit a design pattern into code where it’s not needed (just because you read about it a few days ago)
  6. Very long variable names
  7. Typos in variable names
  8. No comments (I am OK with this if the code reads like a book, but sometimes you are writing something complex, like an algorithm, that won’t make sense to a newcomer, and leaving a one-liner comment about your decision process would help people understand why you are doing it)
  9. Lack of enough tests in new code
  10. No tests, or borderline tests, when mutating legacy code; also, no effort to make the legacy code better
  11. Wrong technology choice
  12. Introducing a SPOF in the architecture
  13. Typical database schema issues:
    1. Missing indexes
    2. Typos, using Java conventions for DB field names, or conventions mismatched with existing field names
    3. Very long column names
    4. Wrong datatypes, like strings for dates or varchar(1) for booleans
    5. Field lengths that are too large or too limited

Since you’re looking to review your whole project, Stack Overflow, the Code Review Stack Exchange, and programming subreddits won’t work.

Here are some options that will help a non-technical person such as yourself:

Freelancers and Agencies

Consider hiring a more experienced freelancer or agency to review your outsourced team’s code. You might even be able to hire a local software developer to review their work.

  • UpWork, Freelancer, Fiverr, Toptal, Codementor, etc. – With rates for code review as cheap as $10/hour, there’s a range of quality.
  • Development Agencies – There are thousands of software development agencies around the world that offer code review. Similar to hiring freelancers, they start at around $10/hour. See this Quora question for tips for choosing a software development company. Be sure to read through the checklist for vetting and hiring them.

On-demand Code Review

If you want a professional option then look at PullRequest.com. It’s a platform for on-demand code review that works with GitHub, Bitbucket, or GitLab to provide code quality feedback from vetted reviewers. They can review your project for bugs, security issues, code maintainability, and code quality issues.


Algorithm and Tricks to save up to 30 cents per litre on Gas in USA and Canada


Looking to save a few cents per litre on gas in the USA or Canada? Here are a few tips and tricks that can help you do just that.

First, make sure you’re using the gas rewards program at your local gas station. By using a gas rewards card, you can earn points that can be redeemed for discounts at the pump. Additionally, many gas stations offer coupons and promotions that can save you money on gas purchases. Be sure to check the gas station’s website or app for any current offers.

Second, consider carpooling or taking public transportation when possible. This will help you save on gas costs and may even improve your fuel economy. If you must drive, try to consolidate your errands into one trip instead of making multiple trips. This will also help you save on gas.

Finally, keep your car well-maintained. A well-tuned engine can improve your fuel economy by up to 4%. Additionally, properly inflated tires can also improve your fuel economy by up to 3%. By following these simple tips, you can easily save up to 30 cents per litre on gas in the USA and Canada.

Gas is getting very expensive, so we are trying to help consumers save by sharing daily tricks that can save you up to 30 cents per litre on gas in the USA and Canada.


Tricks to save up to 30 cents per litre on Gas in USA and Canada

1- Go shop for Food at Safeway and get an automatic 15 cents per litre discount at Safeway Fueling stations

2- To get 30 cents discount at Safeway Fuel stations, use the code below based on Epoch:

[Day]-800-[random 5digits]



Example: Safeway 16 to 30 cents off gas code

  • For July 16 2022, so the  Epoch Day is:  197
  • A random 5 digits  (Change the 5 digits if it doesn’t work. )
  • So a Coupon to save 30 cents per litre at Safeway Gas Station on July 16, 2022 is:   
  • 197-800-263944
  • (Remember to change the random 5 digits until it works)

3. Purchase Discount Gift Cards for Gas

Rewards card – Cashback


You can find a great many discounted gift cards for gas online. These will work at most Shell, Gulf, and Mobil stations. They only save a few dollars per purchase, but that can add up to big savings on a yearly basis.

The Optimum program is one of the better value points programs. And the points convert to cash discounts on stuff you buy every day, rather than air travel and catalogues full of slightly aged-out consumer trinkets that you don’t really need.

PC Optimum savings on gas

If you are a Costco member and also optimum member, which option gives you the most savings?

From a quick Google of prices in my area, it looks like the average price is around $2/L and Costco is currently around $1.75. The value of the Optimum program is more that you can keep your eye out for specials and earn points which can then be put toward gas purchases. But the basic earnings of 10 pts/litre (1¢ equivalent) and redemptions of up to 4,000 pts ($4 equivalent) aren’t anywhere near 25¢/litre. If you don’t mind the lines 😉

If you have one near, try to fuel up at Mobil gas instead of Esso. Esso provides 15 points per liter, Mobil gas provides 35 points per liter.

I used to have a work vehicle that I filled with Mobil gas, on the company credit card, got approx. 30 dollars of free groceries from Loblaws every week because of this practice.

Which card gives 10% cash back at the moment?

TD , CIBC and Scotia all have one right now. It’s 10% cashback on purchases up to $2000 in the first three months.

I use the CIBC Dividend card. Not only do I save on gas ($0.03 off a litre until you reach 300 L, then $0.10 off once, and then it resets), but I earn cashback everywhere. Last year I earned about $580 cashback; this year I’m already over $200.

I bank with CIBC; when I use my card, I pay it off the same day, so I’ve never paid interest.


Note that your max yearly cash back for the 4% (gas and groceries), 2% and 1.5% categories is $800 (4% of $20,000). After $20,000 yearly spend, the 4% cash back ends, and is replaced with 0.5% on all purchases. In other words, if you spend on any of the other categories, you won’t get the $800, because you’ll hit $20,000 total spend before you hit $20,000 on gas and groceries.
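To make the cap arithmetic concrete, here is a small sketch assuming the worst case described above, where every other dollar of spend counts toward the $20,000 threshold first (the rates and cap are the figures quoted in the text, not the card’s official terms):

```python
def max_4pct_cashback(gas_grocery_spend, other_spend, cap=20_000):
    """Worst-case sketch: all spend counts toward the $20,000
    threshold, after which the 4% category drops to 0.5%."""
    room_for_4pct = max(0, cap - other_spend)   # cap left for the 4% rate
    at_4pct = min(gas_grocery_spend, room_for_4pct)
    at_half_pct = gas_grocery_spend - at_4pct   # overflow earns 0.5%
    return 0.04 * at_4pct + 0.005 * at_half_pct

# $20,000 on gas/groceries alone hits the full $800;
# $5,000 of other spend first shrinks that to $625.
```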

I got a Rogers World Elite card, and use it for all purchases except gas and groceries, for 1.5% cash back. I use the cibc dividend card only for gas and groceries for 4% cash back.

CAA members save 3 cents per L at all shell stations. And they use air miles.

4. Drive Sensibly

Rapid acceleration and short bursts of speed can cost you a lot when it comes to gas. Slow and steady driving is always preferable to erratic driving. Even large vehicles such as Land Rovers can get noticeably better mileage using cruise control. Practice smooth driving and you’ll definitely save some money through improved gas mileage.

5. Time Your Trips to the Gas Station

Gas prices tend to rise on Thursdays because of the high likelihood of weekend travel. To avoid these increased prices, fill up the tank before Thursday and ahead of major holidays.

6. Utilize Your Smartphone to Find the Cheapest Gas Station

Your cell phone is good for more than browsing Facebook and Instagram. Use it to find the cheapest gas in your area. Apps like AAA TripTik and GasBuddy will help you find the closest and cheapest fuel.

Something I’ve noticed with the gas-saving apps: many times the prices are wrong. I show up at a station, end up refueling anyway, and then a few minutes later I see the price has been put back to the “fake low price”.


I think owners are gaming the system in order to draw people in.

7. Get a Gas Rewards Card

Too few people have a gas rewards card. That’s like not joining a rewards plan even though you’re a long-standing customer. There are a lot of sites out there that can introduce you to deals on fuel rewards. You can get free gas if you collect enough points, so why not? Sign up for that rewards card!

8. Don’t Leave Your Engine Idling for Very Long

Shut off your engine if you’re not going anywhere. You’re wasting gas, and you’re polluting the environment.

9. Strategically Use Cards or Cash

Some gas stations charge a premium if you pay by credit card, while others give you a discount for it. Find out which is which, and use whatever saves you money.

10. Maintain Your Car

Keeping your vehicle maintained is how you save money on gas over the long haul. If you have a clunker, or a vehicle that you treat badly, it will get bad mileage. Simply keeping your tires inflated can improve your gas mileage by 3.3%. So pay attention to your maintenance.

11. Be Picky


Stop going to the gas station closest to your home or the interstate just to get it over with. This can cost you almost 15 cents more per gallon. Find a gas station with cheap prices and stick with it.

12. Don’t Overload Your Car

This is a no-brainer, but it needs reinforcing. If you’re hauling your whole life around in your vehicle, stop doing it. Obviously, the heavier your vehicle gets, the more gas it takes to cover the same distance. Keep only the bare necessities in your vehicle and leave the rest at home.


13. Drive more slowly, think ahead, and use engine braking.

The amount of time you gain by speeding is tiny compared to the amount of fuel you can save.

14. Plan grocery trips for longer intervals. Instead of going a few times a week to pick up a couple of things, go once every 2-3 weeks with a list of everything you’ll need for that timeframe.

15. Drive the smallest stick-shift diesel available. Press in your clutch on downhills, especially long ones on the freeway. Play a game where you try to put as little foot on the gas as possible.

16. Buy a more fuel-efficient car. That makes the biggest difference.

17. Drive less. Combine trips. Carpool. Walk. Bicycle. Take public transit.

Do things (including many types of work) that can be done over a wire, over that wire, instead of driving to them. Drive a more fuel-efficient vehicle. If people bothered to think about when these options are possible, they would find that they generally are.

18. Limit discretionary driving.

I have a gas-powered SUV and paid nearly $60 to fill its tank last week. I no longer drive around town just for the hell of it—I have to be strategic. Instead of driving to Target or Walmart for household goods and groceries, I order these necessities for delivery via Amazon. If I do need to drive to one part of town, I hit all the shops in that area at once and act as if I won’t be back for weeks. Ultimately, I am driving with intent—every trip has a purpose.

19. Tyres

Find the Tyre pressure placard in your car and make sure your tyres are pumped up to the correct pressure.

Try to do this when the car has been driven for less than 5 minutes: hot air expands and will give a false reading if the tyres are hot, so check when they are cold. Do NOT pump them up to the max pressure listed on the side of the tyre.

Keeping your tire pressure right is not only a safety measure but also saves fuel, because the correct pressure reduces rolling resistance with the road.

Tip: tire pressure checks are free at every petrol pump, but that doesn’t mean they’re useless. Make use of them every time you can.

Actually, over-inflate your tires for best gas mileage.

The number on your door is the recommended pressure. The max pressure on the tire is the “do not exceed” number. Something in between is fine.

The drawback is that you’re going to wear out the middle of the tire quicker than the sides (because it’ll dome a bit from the higher pressure if you don’t have enough weight to force it flatter again). This might be noticeable after years.

But tires aren’t that expensive, and fuel is. You’ll pay off the small reduction in tire life with the bigger reduction in fuel use (and, especially if you’re in a pinch today, you could kind of consider it a deferred expense). And, it’s a small change you can always taper off again later.

A side effect will be a slightly harsher ride, and slightly less grip (not great for the winter).

Roughly speaking, 50% of your gas usage goes to rolling resistance in the tires and the other 50% to air resistance. At city speeds, tires and stops/starts make up most of your gas cost; at around 2/3 to 3/4 of highway speed, air resistance takes over. Above 60 mph (about 100 km/h) you start to gobble fuel disproportionately: going 10% faster uses roughly 33% more fuel.
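The 10%-faster figure follows from the cube law for aerodynamic drag power (the power needed to overcome drag scales roughly with speed cubed). A minimal sketch, ignoring rolling resistance and engine-efficiency effects:

```python
# Drag power scales roughly with the cube of speed (P ~ v**3),
# so a 10% speed increase needs ~33% more power to overcome drag.
# Simplified sketch: ignores rolling resistance and engine efficiency.

def drag_power_ratio(v_new: float, v_old: float) -> float:
    """Ratio of aerodynamic drag power at v_new versus v_old."""
    return (v_new / v_old) ** 3

increase = drag_power_ratio(66, 60) - 1.0  # 66 mph is 10% faster than 60
print(f"~{increase:.0%} more drag power")  # ~33% more drag power
```

Real fuel use won't track this exactly, since only part of the load is aerodynamic, but it shows why the penalty grows so fast above highway speed.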

Avoid situations where you have to use the brakes. Any time you brake, you waste all the energy you put into accelerating the vehicle. In stop-and-go traffic this is most of your fuel use, so instead of racing forward to fill gaps and then having to stop, just drive at half the speed, steadily. If you see the light is red, get off the gas and coast; don’t accelerate up to it and then hit the brakes. Be careful not to block turning lanes by driving slower: just because you’re stopping at the lights doesn’t mean everyone behind you is.

In short… there’s no free lunch here. If there were ways to save money on gas, those would already be things we’re doing. All the little tips and tricks might add up to 20%, which is like… where gas prices were a month ago.

The only easy way to save money on gas is to drive less.

18. Lose weight.

Get rid of any excess stuff you have in your car. Every extra kilo costs money to haul around. The same goes for aerodynamics: those roof racks you never use? Take them off!

19. Change your driving style.

So many people these days drive aggressively. Stamping your foot to the floor whenever you accelerate is unnecessary and burns far more fuel than using 50-75% throttle. There are other throttle positions than 100%!

Instead of speeding up to close any gap in front of you, leave it there and coast a bit. Someone may change lanes into it; who cares? Watch ahead: if cars start braking up the road, take your foot off the throttle early and coast instead of riding the car in front, constantly braking and accelerating.

20. Drive smoothly. It’s amazing how big a difference driving style makes to fuel consumption.

21. Engine Air Filter

Make sure the engine air filter is clean; a dirty air filter makes for poor fuel consumption.

22. Premium Fuels

Only go for premium fuel if the car manufacturer recommends it. Otherwise, you are just increasing your fuel bill and the overall running cost of your car. It’s a myth that premium fuel will save fuel or increase your car’s mileage.

Tip: buy regular fuel unless your manufacturer specifies premium; premium costs more without saving fuel.

23. Cruise Control

Using cruise control on the highway will provide a smooth ride at a steady speed. Ultimately it will improve your mileage and save you a lot of fuel.

24. Accelerator Pedal Control

If you keep a soft foot on the pedal, you will save a lot of fuel. With a hard foot, the car consumes the maximum amount of fuel to generate the power demanded.

Tip: after reaching a speed of 70-80, try easing off, holding the pedal at the fixed position where acceleration is almost zero.

25. Keep RPM Low

Higher RPM means higher fuel consumption; lower RPM saves fuel and gives every passenger in the car a safer ride.

Tip: remember that driving fast at high RPM saves you very little time; with the traffic on the roads these days, rarely more than 5 minutes. Keep it low to save fuel.

26. Save Fuel by Driving Smart

Driving consciously and safely will always help maintain a car’s mileage and save fuel. Avoiding jackrabbit starts and hard stops helps too.

Tip: easy, safe driving saves fuel and keeps you safer.

27. Overlooked button on your car may help save on gas

The ‘Air Recirculation’ button on your A/C might cool off your car faster and save you a little gas. On most cars, trucks, and SUVs the air recirculation button is easy to identify by its symbol: a half-circle inside the outline of a vehicle. Many people say they’re aware of the button but are not sure when it should be on or off.


Another function of this climate control system is to stop pollution and exhaust fumes from entering the vehicle. Having this button activated will also help to greatly reduce pollen when driving, which is a big positive if you suffer from outdoor allergens.

“If you don’t switch the air recirculation button on, then your car’s air conditioning will be constantly cooling warm air from outside your vehicle, and will have to work much harder, putting more stress on the blower and air compressor,” said Ruhl.

Another benefit to using the air recirculation feature is the money you could save on gas.

“Cars are usually more fuel-efficient when the air conditioner is set to recirculate interior air. This is because keeping the same air cool takes less energy than continuously cooling hot air from outside,” said Ruhl.

While the recirculation button is great for the summer months, it may be best to avoid it in the winter or when your windows become foggy.

“Anytime you’re using defrost, it’s best to not have that button on. Also, using it while you have your heater on isn’t going to do anything for your vehicle,” said Ruhl.


28. Your driving habits are a huge factor. Gentle accelerations and decelerations help dramatically. Coast to that upcoming red light instead of staying on the gas and then braking. Cruise at 60 in the right lane instead of accelerating between 65 and 75 while passing people in the left. Things like that.

Also, for most cars, above 55 mph it’s better to keep your windows up and use the A/C; below 55, it’s better to have the windows down and the A/C off. It varies by model due to aerodynamics, but 55 is a good rule of thumb.

29. Don’t accelerate hard

Try to slow down more gently; if you’re lucky, the light will go green before you stop.

Be consistent with your speed: if it’s a 30 mph zone, try not to go faster than that, or get distracted to the point where your car starts slowing down.

If it’s hot out, keep the windows down; the A/C in older cars can make the car consume more gas, though I’m not sure how the newer cars are doing with that.

Make sure your tires have good tread; bald tires can spin more easily, and uneven wear can cause additional issues.

30. If you drive an SUV, trade it for a Toyota Corolla

Scientifically proven that the wavelength of reflections on the beige tone is in the optimal bandwidth to reduce optical resistance, thus better fuel efficiency.

Check your engine air filter. Make sure it is clean, replace if necessary. Make sure your tires are filled to the recommended pressure.

Also change spark plugs at their recommended service life.

Also, if your car is over 160k km, it’s a good idea to replace the O2 sensors, as they get slow with age. Replacing all four sensors in my car took my consumption from 9.x L/100 km to the high 7’s.
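For a rough sense of what that kind of drop is worth per year, here is a sketch; the 9.5 and 7.8 L/100 km figures are stand-ins for the post’s “9.x” and “high 7’s”, and the annual distance and fuel price are assumptions for illustration only:

```python
# Hypothetical figures: the post only says "9.x" to "high 7's" L/100 km.
before, after = 9.5, 7.8        # L/100 km (assumed stand-ins)
km_per_year = 15_000            # assumed annual distance
price_per_litre = 1.60          # assumed fuel price, $/L

litres_saved = (before - after) * km_per_year / 100
dollars_saved = litres_saved * price_per_litre
print(f"{litres_saved:.0f} L/year saved, about ${dollars_saved:.0f}/year")
```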

What kind of car should you buy that saves on gas?

A Prius, or any type of gas/electric hybrid, or a smaller vehicle, like a Toyota Corolla, Honda Civic, Chevy Malibu, Ford Focus, VW GTI or Rabbit.

But there is a direct correlation between How you drive, regardless of What you drive. I have a 1998 Chevy Silverado, with a 5.7L (350 cu in) V8, and I can get great MPG’s when I drive it sensibly, and don’t have a ton of unnecessary stuff/gear in the back, or even back seat.

Make sure the tires are set to the appropriate PSI. Always set them to the pressure setting on the inside of the drivers door. On that subject, changing the tire size or wheel size and sidewall thickness will also have a negative effect on MPG.

You would be surprised how much stuff a lot of people have laying in the back of their car, and if they would simply clean it out, they could save money.

Also, keeping your vehicle tuned up and the oil changed per the owners manual will also help keep the MPG high.

Not speeding away from every stop sign or stop light will also help.

 

Keeping your speed down on the freeway will help.

However, opting to roll the windows down instead of using the A/C to keep cool will actually create drag on the car and lower the efficiency. So crank the “heat sucker” up to high. Not only will rolling the windows up save fuel, it will also reduce noise and fatigue, so you can drive more comfortably.

What burns more gas, accelerating as fast as possible to 60 mph (e.g. 10 seconds) or accelerating slowly (e.g. 30 seconds)?

Not long ago I had a ’16 Subaru WRX. Fast, turbo-charged all-wheel-drive car. Terrible gas mileage. It’s also heavy, roughly two tons.

One day, I did an experiment on the city streets. Rather than accelerate in a controlled manner and drive at a consistent pace, I put the gas pedal all the way down to reach about 15 mph over the speed limit, and then I put the car in neutral, and let it coast. The car would coast a full mile before it was going slow enough (5 to 10 mph below the speed limit) that I had to put it in gear and goose the throttle again full blast and bring it up to 15 mph over the speed limit.

In this simple test, the overall gas mileage skyrocketed. It went from about 25 mpg to more like 40 mpg. And yet I was ultimately going the speed limit on average, and kicking off my trips very quickly.

This led me to a realization. Yes, holding that gas pedal all the way down uses up a lot of gas. But what it also does is important: it brings you up to speed. What also uses up a lot of gas is simply cruising—not coasting, cruising. That’s where most of your gas is being spent, because your engine is expending gas, quite a bit of it, actually, just to keep up and maintain velocity.

And when you accelerate slowly, you’re effectively cruising, without being up to speed, yet with a little extra gas. That’s wasteful, because you’re going slow and still using up plenty of gas. Is it more wasteful than the explosion of rushing your car forward immediately? Actually, perhaps so, if you’re taking too long to do it.

Remember, just turning the engine over uses fuel. Accelerating quickly brings the car up to speed quickly, which brings the engine’s output to its maximum quickly. That is not an infinite dump of fuel: it is limited by what the fuel line, injector, and cylinder can mix with air and compress, which is measurable, and it’s actually not as far off from cruising fuel use as people seem to think. Source: Quora
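For scale, converting the experiment’s reported 25 to 40 mpg into gallons per 100 miles shows the size of the claimed saving directly:

```python
# Convert the reported mpg figures into gallons consumed per 100 miles.
def gallons_per_100mi(mpg: float) -> float:
    return 100.0 / mpg

saved = gallons_per_100mi(25) - gallons_per_100mi(40)
print(f"{saved:.1f} gallons saved per 100 miles")  # 1.5 gallons saved per 100 miles
```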

 TIPS ON PUMPING GAS THAT WILL SAVE YOU $$$

1️⃣ Only buy or fill up your car or truck in the early morning, when the ground temperature is still cold. Remember that all service stations have their storage tanks buried below ground. The colder the ground, the denser the gasoline; when it gets warmer, gasoline expands, so if you buy in the afternoon or evening, your gallon is not exactly a gallon. In the petroleum business, the specific gravity and temperature of gasoline, diesel, jet fuel, ethanol, and other petroleum products play an important role.

2️⃣ A 1-degree rise in temperature is a big deal for this business. But the service stations do not have temperature compensation at the pumps.

3️⃣ When you’re filling up, do not squeeze the trigger of the nozzle to its fastest setting. If you look, you will see that the trigger has three (3) stages: low, middle, and high. You should be pumping on the low setting, thereby minimizing the vapors created while you pump. All hoses at the pump have a vapor return: if you pump at the fast rate, some of the liquid going into your tank becomes vapor, and those vapors are sucked back into the underground storage tank, so you’re getting less for your money.

4️⃣ One of the most important tips is to fill up while your gas tank is still HALF FULL. The reason is that the more gas you have in your tank, the less air occupies its empty space, and gasoline evaporates faster than you can imagine. Gasoline storage tanks have an internal floating roof; it leaves zero clearance between the gas and the atmosphere, which minimizes evaporation. Unlike service stations, here where I work every truck that we load is temperature compensated, so every gallon is actually the exact amount.

5️⃣ Another reminder, if there is a gasoline truck pumping into the storage tanks when you stop to buy gas, DO NOT fill up; most likely the gasoline is being stirred up as the gas is being delivered, and you might pick up some of the dirt that normally settles on the bottom.

6️⃣ Note: if the pump repeatedly shuts off early, it could be a sign of a problem with the vapor recovery system, such as a clogged carbon canister.
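Point 1️⃣’s temperature claim can be sized up with a quick sketch, assuming a volumetric thermal expansion coefficient of about 0.00095 per °C for gasoline (a typical handbook figure, not from the post):

```python
# Gasoline expands roughly 0.095% per degree C (assumed coefficient).
beta = 0.00095      # volumetric expansion coefficient, 1/degC (assumed)
delta_t = 10.0      # degrees C warmer than the baseline

volume_growth = beta * delta_t
print(f"a 'gallon' holds ~{volume_growth:.2%} less fuel mass when {delta_t:.0f} C warmer")
```

So even a 10 °C swing changes the fuel mass in a gallon by under 1%, which is the scale of saving this tip is playing for.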

How can you save gas when driving long distances?

1. First and foremost, maintain a steady speed.
2. Fill your tires 1 or 2 psi above the prescribed pressure.
3. Do not travel with your AC off, especially on a long-distance journey. With the AC off you will have to lower the windows, and at speeds above 60 miles per hour that hurts the aerodynamics of the car and can increase fuel consumption.
4. Remove all unnecessary weight from the car.
5. Choose a well-maintained road even if it takes more time than a bad road.
6. Have your car checked by a mechanic before you travel.

Do automobiles get better fuel mileage with the A.C. on and windows up, or A.C. off, and windows down?

Under 70 mph with your windows up, the AC will use more energy than driving with the windows down and the AC off. As your cruising speed increases, the aerodynamic drag on the car increases to the point where having the windows down creates a greater load on the engine than the AC does. This only applies to modern cars, which are generally quite aerodynamic; having the windows up or down doesn’t make much difference to vintage cars. Remember, though, that AC takes more power than you might suppose, so on a long hot journey, driving with the AC off will improve mpg. Taking the AC equipment off altogether makes an even bigger difference: as much as 10%.


Does cruising in a car save on gas? How?

 

Since cruising means holding the vehicle at a constant velocity, it requires minimum effort (power) from the engine. The power required only has to cancel the deceleration from frictional forces (air drag and road adhesion). Since less power is required from the engine, the ECU ensures minimum gas is used.

Can lowering your tailgate really save on gas?

No, it’s a myth; in fact the now-cancelled show MythBusters did an episode on it. A pretty legit test, if I do say so, although if you have a truck with two gas tanks you could test it yourself, as I have. The one thing that can help seems counterintuitive: add a little weight, around 100 pounds or so, and make sure it’s over or behind the rear axle in the bed. What this does is give the rear wheels a bit more traction, and that increases your gas mileage a little. A trick I learned from my Grandpa as a curious little kid, wondering why he always had a couple of spares mounted on each side of the bed right up against the tailgate. Those old gas guzzlers need all the efficiency they can get.

Bonus: this also works better in snow, ice, and slush. Get some sandbags and throw them in the same spot behind the axle and you limit fishtailing/sliding in the winter. Use more weight than the hundred pounds; plus, the bags have multiple uses. If you get stuck with the tires spinning on ice, you can open a sandbag and pour the sand in front of and behind the tire to help gain traction. Make sure to do both sides of the truck, as you probably won’t have positraction. Lol. Additionally, if it’s not too cold, you can pee on the ice around the tire. I have gotten many people unstuck with a little sand and piss.

 


Can I keep driving on eco mode? How much does it save on gas?

Economy mode is useful in most conditions, but be advised that some engines need to be “blown free” occasionally, using higher RPM and full engine load, to keep the exhaust/turbo system from clogging. That applies especially to diesel engines with an EGR system. Driven only in “grandfather” mode, those engines will need an extensive overhaul well before reaching their estimated end of service life, which absolutely nullifies any eventual gains from eco mode.

 

What are some ways to save on gas annually?

If your question refers to the gasoline your car burns, then to save gas you should follow the instructions of your car’s manufacturer. If it refers to the natural gas you use at home to heat food, water, etc., then the only recommendation is to watch for leaks if you suspect you are losing gas; having an experienced technician fix those leaks will resolve the problem. Coming back to your car: not speeding, and not letting the engine idle for long periods just to keep the air conditioner (or the heater in winter) running, are two important ways to reduce gasoline consumption.

Summary:

Looking to save a few cents per litre on gas? Here are a few tips and tricks that can help you do just that:

1. Check gas prices before you fill up. Many gas stations offer discounts for cash, so it’s worth checking beforehand to see if there’s a station nearby that offers a cheaper price.

2. Use coupons. Many gas stations offer coupons that can be used to save money at the pump. Simply present the coupon when you’re paying and you’ll automatically get a discount.

3. Shop around for gas cards. Some gas cards offer discounts of up to 5 cents per litre, so it’s worth doing some research to see if you could be saving even more money.

4. Drive less. This one is obvious, but the less you drive, the less gas you’ll need to purchase. So, if you can carpool, take public transportation, or walk/bike instead of driving, you’ll save yourself some money in the long run.

5. Keep your car well-maintained. A well-tuned engine can improve your fuel economy by up to 4%, so it’s worth getting your car checked out by a mechanic every

By following these tips, you can easily save money on gas without making major changes to your lifestyle.

Does getting a Tesla make financial sense in terms of cost savings on gas and maintenance?

If you looked at all the cars in the world and calculated which one had the lowest cost per mile transporting someone from Point A to Point B, it would probably not be a Tesla. If people used that criterion for buying a car, then there would be only one car in each class. People buy cars for lots of reasons. If you’re keeping the car for 5 years, some high-mileage hybrids will cost less (absent government subsidies) than a Tesla. Gas is cheap these days. Push it out 10 years, or if gas prices go back up, the calculus is different. Your Tesla will outperform that high-mileage hybrid and be a lot more fun to drive. How much is that worth to you?

With rising prices, what are smart ways to save money or good alternatives like horse and carriage to save on gas?


This is my plan for tackling the current inflationary environment in the United States:

  • Limit discretionary driving. I have a gas-powered SUV and paid nearly $60 to fill its tank last week. I no longer drive around town just for the hell of it—I have to be strategic. Instead of driving to Target or Walmart for household goods and groceries, I order these necessities for delivery via Amazon. If I do need to drive to one part of town, I hit all the shops in that area at once and act as if I won’t be back for weeks. Ultimately, I am driving with intent—every trip has a purpose.
  • Meal substitution. In my area of the U.S., beef is less expensive than chicken. Thus, I substitute beef for chicken and prepare meals like spaghetti, burgers, and chili. Also, my cost of groceries has risen faster than the cost of a Chipotle burrito, for instance, so I sometimes eat a Chipotle burrito instead of eating at home.
  • Plan for higher utilities. My energy bill is much higher today than it was last year. Since I live in an apartment, each unit’s bill is decided by dividing the energy cost for the entire building by the number of occupied units. Thus, I have very little control over the cost of my monthly bill. I must prepare for this expense and not let it blindside me.
  • Limit unnecessary consumption. Now is not the time to be frivolous with money. All nonessential consumption (i.e., online shoe shopping, going to the movies, etc.) is essentially placed on hold.
  • Invest tactfully. With inflation running hot, the Federal Reserve likely hiking interest rates in the coming months, and macroeconomic and political uncertainty, the stock and crypto markets may fall further before rising once again. Having dry powder (i.e., cash) on hand to take advantage of the situation is not a bad idea. I’ve been building my cash position over the past couple of months, so I can buy assets when others are fearful and need/decide to sell. As a long-term investor, you want to buy into fear and weakness, and I believe we are in that environment.
 

How much money do you save on gas with a hybrid?

If you compare it with a small, light ICE vehicle, you won’t save anything; but if you compare it with an ICE car of the same weight as the hybrid, then you will save money, possibly as much as $10 every 200 miles.


How much money do you save on gas by paying cash instead of credit in the long-term?

 

Using a 10-cent-per-gallon difference between cash and credit, that comes to about $28 extra per year to use my credit card, given my mileage and average MPG. That’s about $2.33/month, so not much at all. Then take into account that I get 3% back at the pump from my credit card rewards program, which comes to $29/year. Those were round-number calculations, so we’ll just call it even.
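The round numbers above can be reconstructed; the annual volume and pump price below are assumptions chosen to match the poster’s $28 and $29 figures, not facts from the post:

```python
# Assumed inputs reconstructed from the poster's round numbers.
gallons_per_year = 280          # implied by $28 / $0.10 per gallon
cash_discount = 0.10            # $/gal saved by paying cash
price_per_gal = 3.45            # assumed pump price, $/gal
rewards_rate = 0.03             # 3% credit-card cash back

cash_savings = gallons_per_year * cash_discount
card_rewards = rewards_rate * gallons_per_year * price_per_gal
print(f"cash saves ${cash_savings:.0f}/yr, card rewards ${card_rewards:.0f}/yr")
```

With numbers this close, the cash-vs-credit choice really is a wash, which is the poster’s point.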

 

Does cruise control actually save gas or is that a myth?

The cruise control itself does not save any gas compared to simply keeping your foot at the same position. However, what cruise control does tend to do, is influence the driving style of the human inside.

The whole point of the cruise control is that you don’t need to constantly control the throttle. And thus you will tend to want to avoid needing to do that while using it. At the most, you will want to disengage the cruise control, to reduce speed slowly when needed, and then re-engage when you can overtake.

The result is that you tend to start looking further ahead, a few cars further than the one directly in front of you. Coming up on a car, you will decide earlier if you can overtake, or if you lift the throttle. This is very positive for reducing fuel consumption.

Many drivers without cruise control will not lift until the last moment, and then often need to brake when they can’t overtake. This is disastrous for the fuel consumption.

There are some special situations where cruise control itself can help reduce fuel consumption. One of those is when using the highest gear at very low throttle. This tends to be the most fuel-efficient configuration, but with so little torque, it can be difficult to keep the speed constant. The cruise control can do that very well. If you can’t manage to drive comfortably at that speed yourself, but the cruise control can, then that is a case where the cruise control directly allows higher fuel efficiency.

Another is when your car doesn’t have a mid-console near your foot, and thus it is difficult to lean your foot against it to keep a steady position. In that case, driving without cruise control might lead to constant speed changes as well, and cruise control could help smooth that out. That will also improve fuel efficiency slightly.

But in general, anything the cruise control does, you can do as well. It is the driving style that improves fuel efficiency. Cruise control can encourage a more relaxed driving style, and that helps. If you were already driving relaxed and smoothly, then you’ll not notice any difference.

 

By improving public roads in order to minimize rolling resistance and enhance traction, how much money could be saved on gas consumption and avoidance of traffic accidents?

Patent 6,923,124 has a rolling surface that is 1000 times smoother than typical asphalt. This smooth rolling surface and engineered reverse sag allows steel wheels instead of energy wasting rubber tires. All oil can be avoided (saved) by switching to aerodynamic vehicles rolling on three more perfect rolling surfaces configured in a triangle. There is no reason a car should ever leave the normally traveled portion of the roadway. Designing in 3D means a vehicle can never come off the designated trajectory. Instead of a reactive suspension producing pitch, yaw and roll the guideway produces those motions with precision. This improved “road” (guideway) allows for 180 mph travel at a tiny fraction of the required energy. This in turn allows all transportation to be powered by a 7 foot wide s
 

If I drove 100 miles every day, how long would it take me to pay off my electric car with the money I save on gas?

 
Ok, let’s get serious, and go about doing this the way a person would who’s really trying to save money. Two scenarios:

  • Aggressive scenario: Buy a used 2014 Nissan Leaf for $8,000. It will only have about 30,000 miles and a range around 85 miles. In my area, electricity will cost 2 cents per mile, since our electricity is fairly cheap. Assume the gas car being replaced was getting 30 mpg, so its fuel cost is 11 cents per mile. You are commuting to work each day, 50 miles each way. You don’t have enough range to get home, but your employer offers free charging. (That can happen; my employer does.) Driving 100 miles per day, paying for half and getting half from your employer, will cost $1.00 per day, or $30 per month. The gas car would cost $11 per day, or $330 per month. Savings is $300 per month.
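Using the aggressive scenario’s own figures (an $8,000 used Leaf, $330/month of gas replaced by $30/month of electricity), the payback time works out to just over two years; a quick sketch:

```python
# Payback sketch from the scenario's own numbers (not a general claim).
car_cost = 8_000                 # $ for the used Leaf
gas_cost_per_month = 330         # $ at 30 mpg, 100 miles/day
electric_cost_per_month = 30     # $ with half the charging free at work

monthly_savings = gas_cost_per_month - electric_cost_per_month
months_to_payback = car_cost / monthly_savings
print(f"payback in about {months_to_payback:.0f} months")  # payback in about 27 months
```

Note this credits the entire car price against fuel savings alone, ignoring resale value, maintenance differences, and the cost of whatever car it replaces.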
 

What kind of car should I buy that saves on gas?

Short answer: Toyota Corolla or Honda Civic.



When I have little gas left in my car, is it better to drive fast or slow so that I can get the best distance out of the amount of gas left?

 

Look at all the mileage techniques that other people have formulated over the years; they all apply. Basically:

  1. Accelerate firmly from a stop. Too slowly, and you waste time in low gears, which are inefficient; too fast, and your engine burns more fuel than it needs to. 8-10 seconds to 40 mph is good. Get a feel for your car, or maybe get an OBD sensor to monitor fuel usage directly (any car after the 1990s has one, I think).
  2. Try to get to the top gear, and at lowest RPM. Engine spins the slowest for maximum distance. A little slower is usually ok, especially if the car has bad drag coefficients, or there’s a lot of stops. Accelerating to top gear only to brake for a stop light is a waste of fuel.
  3. Modern cars cut fuel when engine braking. Try to roll as far/long as possible without using the brakes and avoid idling. Braking early, then rolling is better than coming to a complete stop since idling is just a constant drain, and if the light goes green, you save kinetic energy. You can usually feel when the ECU starts fuel delivery again when the engine braking lessens, though forcing downshifts is not recommended due to
    1. Increased wear on a transmission which is more expensive than brake replacement
    2. the spurt of fuel needed to kick the RPMs up. Though it may be needed if you need every last drop. Try downshifting early, if needed.

Try not to use neutral when coasting, since the engine is still running and burning fuel at idle. Also, it’s generally illegal.

4. Coast uphill, accelerate downhill (where possible). Don’t roll down the hill backwards.

5. If in a Hybrid, try to coast at 0 throttle and 0 regen. Regen, while nice, is fundamentally inefficient due to multiple transformations of energy. At 0 throttle, the engine is off, and no fuel is used. Hybrids generally have low drag, so can go pretty far on flat ground.

6. Tailgating can save some fuel, but it isn’t really safe. A few car lengths of distance can still yield a bit, though don’t overspeed to do so.

7. Turn the engine off if you’re going to be stopped for long periods of time.

 

Is driving slowly up a hill (consumes less fuel but takes longer) or fast (consumes more fuel but takes less time) the better choice for fuel saving? The hill is 1 km, for reference.

The answer is matching the proper rev range to power to be most efficient.

The real-world answer is that if it’s just a kilometer, the difference is negligible.

Engines are usually most efficient somewhere between 1/3 and 1/2 of the RPM range, and at decent load. So if you would need to floor it to climb the hill in your current gear, downshift; otherwise just press the pedal slightly harder and keep your speed.

As long as you can engine brake downhill the speed doesn’t really matter, just keep the usual traffic speed.

In general accelerating just to slow down later is worse than just keeping steady pace, especially if there are brakes involved.

That’s a good question, but not a simple one to answer.

A car is most efficient when in its highest gear. If you accelerate too slowly, you will spend too much time in the lower gears before you reach the highest gear. Therefore, accelerating excessively slowly is not the most economical technique, and the advice to accelerate slowly to save fuel is WRONG!

A few decades ago, BMW did some tests to determine the most economical way to drive their cars. Although that was before fuel injection became common, I’m sure that the rules have not changed very much. They found that for their cars, the most economical technique was to accelerate with a heavy foot (2/3 to 3/4 throttle) but upshift at only 2000 rpm. That works well for a manual transmission, but is generally impossible with an automatic transmission because it will upshift at a considerably higher speed if you use a heavy foot and, just as bad, delay locking the torque converter. So, with an automatic transmission, the most economical technique is probably to accelerate at a moderate rate, i.e., not too fast and not too slowly.

The rules may have changed slightly because of modern electronic fuel injection systems which control the fuel mixture better. They are less likely to deliver an excessively rich mixture at wide throttle openings which occur with a very heavy foot.

With an Otto-cycle engine (4-stroke, spark ignition), the throttle valve is an important source of inefficiency. The power required to suck in air against the vacuum created by the throttle valve wastes fuel. For that reason, an Otto-cycle engine is most efficient when the throttle valve is wide open, or nearly so, provided that the fuel system does not deliver an excessively rich mixture under those conditions. That's why it is most efficient to use a heavy foot and upshift at low engine speeds, but not so low that the engine knocks or doesn't run smoothly, since that could cause damage.

The most inefficient thing you can do is use a lower gear than necessary for the power you are using. So, if you delay upshifting until 3000 rpm when, with a heavier foot, you could get the same power at 2000 rpm, you are wasting fuel. For fuel efficiency, you should upshift at the lowest possible engine speed that will provide the power you need, but not at such a low speed that the engine protests.

In simplistic physics terms, it makes no difference. You create the same amount of kinetic energy either way – and theoretically, that means you must burn the same amount of fuel.
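The physics claim above is easy to check numerically: kinetic energy depends only on mass and final speed, not on how quickly that speed was reached. A minimal sketch (the car mass and target speed are made-up figures for illustration):

```python
# Kinetic energy: KE = 1/2 * m * v^2. The result is identical whether the
# car accelerated gently or hard, since only the final speed matters.
def kinetic_energy_joules(mass_kg: float, speed_ms: float) -> float:
    return 0.5 * mass_kg * speed_ms ** 2

# A 1500 kg car reaching 100 km/h (about 27.8 m/s):
ke = kinetic_energy_joules(1500, 100 / 3.6)
```

The "complicated" part discussed next is that real engines convert fuel to that kinetic energy at different efficiencies depending on RPM and load.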

For an internal combustion engine with gears it gets complicated.

A conventional car engine has a range of RPM’s at which the engine operates most efficiently. At lower or higher RPM’s gas consumption is worse.

So the trick is to keep the car in that band.

With a manual gearbox, the best approach is to push hard on the pedal to get the RPMs into the efficient range – then accelerate more smoothly to the top of that range – then upshift.

If your car has enough gears, you can arrange to stay in the efficient range for all but the initial acceleration in 1st gear.

However, with an automatic (and especially automatics with not many gears in their gearbox) – you have no direct control over that – so it becomes a matter of tricking the gearbox into doing what you want. With modern gearboxes, you’d hope that the manufacturer set the shift points for efficiency – but it depends on the car. For a sports car they probably optimized the shift pattern for best 0–60 time – so they’d keep the engine in the “power zone” of RPM’s rather than in the “efficiency zone”…for a family sedan, the reverse would be the case. Many cars have a “sport” button which essentially lets you choose between keeping the engine in the power band or the efficiency band.

But even on the “economy” setting, the software won’t be able to prevent you from demanding performance that drives it out of the economy range.

It also varies depending on the air temperature – when the air is cold, it’s more dense and the fuel management software can burn fuel in larger quantities than on hot days – and that may influence the decision.

There are other considerations too. If you accelerate and brake gently then it takes longer to get you where you’re going. This means that the air conditioner, radio, lights, computer(s), etc are running for longer…and that takes energy too.

On the other hand – if you continually red-line the engine, it’ll wear out faster and a worn out engine uses more gas than a good engine.

Honestly – the answer is horribly complicated – and it varies from car to car.

To Conclude:

Looking to save a few cents per litre on gas? Here are a few tips and tricks that can help you do just that:

1. Check gas prices before you fill up. Many gas stations offer discounts for cash, so it’s worth checking beforehand to see if there’s a station nearby that offers a cheaper price.

2. Use coupons. Many gas stations offer coupons that can be used to save money at the pump. Simply present the coupon when you’re paying and you’ll automatically get a discount.

3. Shop around for gas cards. Some gas cards offer discounts of up to 5 cents per litre, so it’s worth doing some research to see if you could be saving even more money.

4. Drive less. This one is obvious, but the less you drive, the less gas you’ll need to purchase. So, if you can carpool, take public transportation, or walk/bike instead of driving, you’ll save yourself some money in the long run.

5. Keep your car well-maintained. A well-tuned engine can improve your fuel economy by up to 4%, so it's worth getting your car checked out by a mechanic regularly.


Sources:

1- Quora

2- Reddit

3- https://vehiclecare.in/blaze/how-to-save-fuel-13-fuel-saving-tips/


Well, this may or may not be cost-efficient. It might actually be cheaper to buy a new car every 100,000 miles or so. But here we go.

  1. Get a good vehicle. Modern pickup trucks and SUVs are not good vehicles. Volvos are affordable and well built. So are BMWs and Mercedes. Look at the van the American Pickers drive – it's a Mercedes. I wouldn't even rule out many American production cars.
  2. Change your oil as frequently as it says in the owner’s manual. And don’t scrimp. You don’t have to get ultra expensive synthetics, but get something more than the bare minimum.
  3. Do other automotive maintenance as frequently as it says in the owner’s manual. Car parts go bad. It’s not just tires either.
  4. Drive carefully. Accelerate and decelerate smoothly. Drive at or near the speed limit. My sister was using our parents' old '96 Saturn until about two years ago, when some idiot t-boned her by running a stop sign.
  5. Speaking of Saturns (which held up well in cold climates because they didn't use a lot of metal): if you live anywhere road salt is used, keep the car as clean and rust-free as possible. Texas has a good climate for cars; they don't know what road salt is in Texas.
  6. Park it in a garage. This is optional if you live somewhere with good car weather. Like Texas.

What is the tech stack behind Google Search Engine?


Google Search is one of the most popular search engines on the web, handling over 3.5 billion searches per day. But what is the tech stack that powers Google Search?

The PageRank algorithm is at the heart of Google Search. This algorithm was developed by Google co-founders Larry Page and Sergey Brin and patented in 1998. It ranks web pages based on their quality and importance, taking into account things like incoming links from other websites. The PageRank algorithm has been constantly evolving over the years, and it continues to be a key part of Google Search today.

However, the PageRank algorithm is just one part of the story. The Google Search Engine also relies on a sophisticated infrastructure of servers and data centers spread around the world. This infrastructure enables Google to crawl and index billions of web pages quickly and efficiently. Additionally, Google has developed a number of proprietary technologies to further improve the quality of its search results. These include technologies like Spell Check, SafeSearch, and Knowledge Graph.

The technology stack that powers the Google Search Engine is immensely complex, and includes a number of sophisticated algorithms, technologies, and infrastructure components. At the heart of the system is the PageRank algorithm, which ranks pages based on a number of factors, including the number and quality of links to the page. The algorithm is constantly being refined and updated, in order to deliver more relevant and accurate results. In addition to the PageRank algorithm, Google also uses a number of other algorithms, including the Latent Semantic Indexing algorithm, which helps to index and retrieve documents based on their meaning. The search engine also makes use of a massive infrastructure, which includes hundreds of thousands of servers around the world.  While google is the dominant player in the search engine market, there are a number of other well-established competitors, such as Microsoft’s Bing search engine and Duck Duck Go.
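As a rough illustration of the link-based ranking idea, here is a textbook power-iteration sketch of PageRank on a toy graph. This is not Google's production implementation, and the page names are hypothetical:

```python
# Minimal PageRank via power iteration. `links` maps each page to the
# pages it links out to. Each iteration redistributes rank along links,
# with a damping factor modeling the "random surfer" jumping anywhere.
def pagerank(links, damping=0.85, iterations=50):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly over all pages.
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Toy web: "c" receives links from both "a" and "b", so it ranks highest.
ranks = pagerank({"a": ["b", "c"], "b": ["c"], "c": ["a"]})
```

The real ranking system layers hundreds of other signals on top, but the core intuition (pages linked to by important pages become important) is captured here.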


The original Google algorithm was called PageRank, named after inventor Larry Page (though, fittingly, the algorithm does rank web pages). 


After 17 years of work by many software engineers, researchers, and statisticians, Google search uses algorithms upon algorithms upon algorithms.

How does Google’s indexing algorithm (so it can do things like fuzzy string matching) technically structure its index?

  • There is no single technique that works.
  • At a basic level, all search engines have something like an inverted index, so you can look up words and associated documents. There may also be a forward index.
  • One way of constructing such an index is by stemming words. Stemming is done with an algorithm that boils down words to their basic root. The most famous stemming algorithm is the Porter stemmer.
  • However, there are other approaches. One is to build n-grams, sequences of n letters, so that you can do partial matching. You often would choose multiple n’s, and thus have multiple indexes, since some n-letter combinations are common (e.g., “th”) for small n’s, but larger values of n undermine the intent.
  •  don’t know that we can say “nothing absolute is known”. Look at misspellings. Google can resolve a lot of them. This isn’t surprising; we’ve had spellcheckers for at least 40 years. However, the less common a misspelling, the harder it is for Google to catch.
  • One cool thing about Google is that they have been studying and collecting data on searches for more than 20 years. I don’t mean that they have been studying searching or search engines (although they have been), but that they have been studying how people search. They process several billion search queries each day. They have developed models of what people really want, which often isn’t what they say they want. That’s why they track every click you make on search results… well, that and the fact that they want to build effective models for ad placement.
  • Each year, Google changes its search algorithm around 500–600 times. While most of these changes are minor, Google occasionally rolls out a “major” algorithmic update (such as Google Panda and Google Penguin) that affects search results in significant ways.

    For search marketers, knowing the dates of these Google updates can help explain changes in rankings and organic website traffic and ultimately improve search engine optimization. Below, we’ve listed the major algorithmic changes that have had the biggest impact on search.

  • Originally, Google’s indexing algorithm was fairly simple.

    It took a starting page and added all the unique words on the page to the index (if a word occurred more than once on the page, it was counted only once), or incremented the index count if the word was already in the index.

    The page was indexed by the number of references the algorithm found to the specific page. So each time the system found a link to the page on a newly discovered page, the page count was incremented.

    When you did a search, the system would identify all the pages with those words on it and show you the ones that had the most links to them.

    As people searched and visited pages from the search results, Google would also track the pages that people would click to from the search page. Those that people clicked would also be identified as a better quality match for that set of search terms. If the person quickly came back to the search page and clicked another link, the match quality would be reduced.

    Now, Google is using natural language processing, a method of trying to guess what the user really wants. From that, it finds similar words that might give a better set of results based on searches done by millions of other people like you. It might assume that you really meant this other word instead of the word you used in your search terms. It might just give you matches in the list with those other words as well as the words you provided.

    It really all boils down to the fact that Google has been monitoring a lot of people doing searches for a very long time. It has a huge list of websites and search terms that have done the job for a lot of people.

    There are a lot of proprietary algorithms, but the real magic is that they’ve been watching you and everyone else for a very long time.
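The early scheme described above (index the unique words on each page, count inbound links, and rank matches by link count) can be sketched in a few lines. The URLs and page texts below are invented for illustration:

```python
from collections import defaultdict

word_index = defaultdict(set)   # word -> set of pages containing it
link_count = defaultdict(int)   # page -> number of inbound links found

def index_page(url, text, outlinks):
    # Unique words only: repeated words on a page are counted once.
    for word in set(text.lower().split()):
        word_index[word].add(url)
    # Each discovered link to a page increments its count.
    for target in outlinks:
        link_count[target] += 1

def search(query):
    words = query.lower().split()
    if not words:
        return []
    # Pages containing every query word, ranked by inbound links.
    matches = set.intersection(*(word_index[w] for w in words))
    return sorted(matches, key=lambda url: -link_count[url])

index_page("a.com", "cheap flights to paris", ["b.com"])
index_page("b.com", "paris travel guide flights", ["a.com", "c.com"])
index_page("c.com", "paris restaurants", ["b.com"])
```

The click-feedback and NLP layers described above would then adjust these raw rankings over time.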

What programming language powers Google’s search engine core?

C++, mostly. There are little bits in other languages, but the core of both the indexing system and the serving system is C++.



How does Google handle the technical aspect of fuzzy matching? How is the index implemented for that?

  • With n-grams and word stemming, and by correcting badly written words. N-grams enable partial matching of almost anything.
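The n-gram part can be sketched as follows: build a character-trigram index over a vocabulary, then rank candidates by how many trigrams they share with the (possibly misspelled) query. The boundary marker, the choice of n=3, and the tiny vocabulary are illustrative assumptions, not Google's actual implementation:

```python
from collections import defaultdict

def ngrams(word, n=3):
    # "$" boundary markers help match prefixes and suffixes.
    padded = f"${word}$"
    return {padded[i:i + n] for i in range(len(padded) - n + 1)}

def build_index(vocabulary, n=3):
    index = defaultdict(set)   # trigram -> words containing it
    for word in vocabulary:
        for gram in ngrams(word, n):
            index[gram].add(word)
    return index

def fuzzy_lookup(index, query, n=3):
    # Score candidates by the number of trigrams shared with the query.
    scores = defaultdict(int)
    for gram in ngrams(query, n):
        for word in index[gram]:
            scores[word] += 1
    return sorted(scores, key=scores.get, reverse=True)

index = build_index(["search", "serach", "sea", "engine"])
```

A real system would combine several n values and normalize scores by word length, but the partial-matching idea is the same.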

Use a ping service. Ping services can speed up your indexing process.

  1. Search Google for “pingmylinks”
  2. Click on the “add url” in the upper left corner.
  3. Submit your website and make sure to use all the submission tools and your site should be indexed within hours.

Our ranking algorithm simply doesn’t rank google.com highly for the query “search engine.” There is not a single, simple reason why this is the case. If I had to guess, I would say that people who type “search engine” into Google are usually looking for general information about search engines or about alternative search engines, and neither query is well-answered by listing google.com.

To be clear, we have never manually altered the search results for this (or any other) specific query.


When I tried the query “search engine” on Bing, the results were similar; bing.com was #5 and google.com was #6.

What is the search algorithm used by the Google search engine? What is its complexity?

The basic idea is using an inverted index. This means for each word keeping a list of documents on the web that contain it.

Responding to a query corresponds to retrieval of the matching documents (This is basically done by intersecting the lists for the corresponding query words), processing the documents (extracting quality signals corresponding to the doc, query pair), ranking the documents (using document quality signals like Page Rank and query signals and query/doc signals) then returning the top 10 documents.

Here are some tricks for doing the retrieval part efficiently:
– distribute the whole thing over thousands and thousands of machines
– do it in memory
– caching
– looking first at the query word with the shortest document list
– keeping the documents in the list in reverse PageRank order so that we can stop early once we find enough good quality matches
– keep lists for pairs of words that occur frequently together
– shard by document id, this way the load is somewhat evenly distributed and the intersection is done in parallel
– compress messages that are sent across the network
etc
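Two of those tricks (start from the shortest posting list, and stop early once enough good matches are found) can be sketched as follows. The assumption that each posting list is already sorted in descending quality order mirrors the reverse-PageRank idea above; the document IDs are hypothetical:

```python
def intersect(posting_lists, limit=10):
    # Probe candidates from the shortest list against the others:
    # the intersection can never be larger than the shortest list.
    ordered = sorted(posting_lists, key=len)
    shortest, rest = ordered[0], [set(lst) for lst in ordered[1:]]
    results = []
    for doc in shortest:               # assumed descending-quality order
        if all(doc in other for other in rest):
            results.append(doc)
            if len(results) >= limit:  # early termination: enough matches
                break
    return results

docs = intersect([[1, 2, 3, 4], [2, 4], [4, 2, 9]])
```

In production this runs sharded by document ID across many machines, with each shard intersecting its own slice in parallel.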

Jeff Dean in this great talk explains quite a few bits of the internal Google infrastructure. He mentions a few of the previous ideas in the talk.

He goes through the evolution of the Google Search Serving Design and through MapReduce while giving general advice about building large scale systems.

https://www.youtube.com/watch?v=modXC5IWTJI&t=30s
 
 


As for complexity, it's pretty hard to analyze because of all the moving parts, but Jeff mentions that the latency per query is about 0.2 s and that each query touches on average 1000 computers.

Is Google’s LaMDA conscious? A philosopher’s view (theconversation.com)

LaMDA is Google’s latest artificial intelligence (AI) chatbot. Blake Lemoine, a Google AI engineer, has claimed it is sentient. He’s been put on leave after publishing his conversations with LaMDA.

If Lemoine’s claims are true, it would be a milestone in the history of humankind and technological development.


Google strongly denies LaMDA has any sentient capacity.

Fun facts about Google Search Engine Competitors

r/dataisbeautiful - [OC] Google dominates the search market with a 91.9% market share


Data Source: statcounterGS

Tools Used: Excel & PowerPoint

Edit: Note that the data for Baidu/China is likely higher. How statcounterGS collects the data might understate # users from China.


Baidu is popular in China, Yandex is popular in Russia.

Yandex is great for reverse image searches; Google just can't compete with Yandex in that category.

Normal Google reverse search is a joke (except for finding a bigger version of a pic, it’s good for that), but Google Lens can be as good or sometimes better at finding similar images or locations than Yandex depending on the image type. Always good to try both, and also Bing can be decent sometimes. 


Bing has been profitable since 2015 even with less than 3% of the market share. So just imagine how much money Google is taking in.

Firstly: Yahoo, DuckDuckGo, Ecosia, etc. all use Bing to get their search results. Which means Bing’s usage is more than the 3% indicated.

Secondly: This graph shows overall market share (phones and PCs). But search engines make most of their money on desktop searches due to more screen space for ads. And Bing's market share on desktop is WAY bigger; its market share on phones is ~0%. Its American desktop market share is 10-15%. That is where the money is.

What you are saying is in fact true though. We make trillions of web searches – which means even three percent market-share equals billions of hits and a ton of money.

I like duck duck go. And they have good privacy features. I just wish their maps were better because if I’m searching a local restaurant nothing is easier than google to transition from the search to the map to the webpage for the company. But for informative searches I think it gives a more objective, less curated return.

Use Ecosia and profits go to reforestation efforts!

Turns out people don’t care about their privacy, especially if it gets them results.

I recently switched to using brave browser and duck duck go and I basically can’t tell the difference in using Google and chrome.

The only times I’ve needed to use Google are for really specific searches where duck duck go doesn’t always seem to give the expected results. But for daily browsing it’s absolutely fine and far far better for privacy.

There is a lot that happens between the moment a user types something in the input field and when they get their results.

Google Search has a high-level overview, but the gist of it is that there are dozens of sub systems involved and they all work extremely fast. The general idea is that search is going to process the query, try to understand what the user wants to know/accomplish, rank these possibilities, prepare a results page that reflects this and render it on the user’s device.

I would not qualify the UI as simple. Yes, the initial state looks like a single input field on an otherwise empty page. But there is already a lot going on in that input field and how it's presented to the user. And then, as soon as the user interacts with the field, for instance as they start typing, there's a ton of other things that happen – Search is able to pre-populate suggested queries really fast. Plus there's a whole “syntax” to search with operators and what not, and there are many different modes (image, news, etc…).

One recent iteration of Google search is Google Lens: Google Lens interface is even simpler than the single input field: just take a picture with your phone! But under the hood a lot is going on. Source.

Conclusion:

The Google search engine is a remarkable feat of engineering, and its capabilities are only made possible by the use of cutting-edge technology. At the heart of the Google search engine is the PageRank algorithm, which is used to rank web pages in order of importance. This algorithm takes into account a variety of factors, including the number and quality of links to a given page. In order to effectively crawl and index the billions of web pages on the internet, Google has developed a sophisticated infrastructure that includes tens of thousands of servers located around the world. This infrastructure enables Google to rapidly process search queries and deliver relevant results to users in a matter of seconds. While Google is the dominant player in the search engine market, there are a number of other search engines that compete for users, including Bing and Duck Duck Go. However, none of these competitors have been able to replicate the success of Google, due in large part to the company’s unrivaled technological capabilities.

 


Programming, Coding and Algorithms Questions and Answers


Coding is a complex process that requires precision and attention to detail. While there are many resources available to help learn programming, it is important to avoid making some common mistakes. One mistake is assuming that programming is easy and does not require any prior knowledge or experience. This can lead to frustration and discouragement when coding errors occur. Another mistake is trying to learn too much at once. Coding is a vast field with many different languages and concepts. It is important to focus on one area at a time and slowly build up skills. Finally, another mistake is not practicing regularly. Coding is like any other skill- it takes practice and repetition to improve. By avoiding these mistakes, students will be well on their way to becoming proficient programmers.

In addition to avoiding these mistakes, there are certain things that every programmer should do in order to be successful. One of the most important things is to read coding books. Coding books provide a comprehensive overview of different languages and concepts, and they can be an invaluable resource when starting out. Another important thing for programmers to do is never stop learning. Coding is an ever-changing field, and it is important to keep up with new trends and technologies.

Coding is the process of transforming computer instructions into a form a computer can understand. Programs are written in a particular language which provides a structure for the programmer and uses specific instructions to control the sequence of operations that the computer carries out. Code is written in a text editor and is then used to produce a software program, application, script, or system.

When you’re starting to learn programming, it’s important to have the right tools and resources at your disposal. Coding can be difficult, but with the proper guidance it can also be rewarding.

This blog is an aggregate of clever questions and answers about Programming, Coding, and Algorithms. This is a safe place for programmers who are interested in optimizing their code, learning to code for the first time, or just want to be immersed in a coding environment.


I think the most common mistakes I witnessed, or made myself, when learning are:


1: Trying to memorize every language construct. Do not rely on your memory; use Stack Overflow.

2: Spending a lot of time solving an issue yourself before googling it. Just about any issue you can stumble upon has, in 99.99% of cases, already been solved by someone else. Learn to search properly for solutions first.

3: Spending a couple of days on a task and realizing it was not worth it. If the time you spend on a single problem is more than half an hour, you are probably doing it wrong; search for alternatives.

4: Writing code from scratch. Do not reinvent the wheel: if you need to write a blog, just find a demo application in the language and framework you chose, and build your logic on top of it. Need some other feature? Find another demo incorporating that feature, and reuse its code.

In programming you need to be smart and prioritize your time wisely. Diving into deep rabbit holes will not earn you good money.

Because implicit is better than explicit¹.

def onlyAcceptsFooable(bar):
    bar.foo()

Congratulations, you have implicitly defined an interface and a function that requires its parameter to fulfil that interface (implicitly).

How do you know any of this? Oh, no problem, just try using the function, and if it fails during runtime with complaints about your bar missing a foo method, you will know what you did wrong.  By Paulina Jonušaitė
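For contrast, here is roughly how the same function looks with the interface made explicit using Python's `typing.Protocol`, so a static checker such as mypy can flag a bad argument before runtime. The `Fooable` and `Widget` names are illustrative:

```python
from typing import Protocol

class Fooable(Protocol):
    def foo(self) -> str: ...

def only_accepts_fooable(bar: Fooable) -> str:
    # The required interface is now visible in the signature,
    # and a type checker verifies callers against it.
    return bar.foo()

class Widget:
    def foo(self) -> str:
        return "foo!"

# Widget structurally satisfies Fooable; no inheritance needed.
result = only_accepts_fooable(Widget())
```

Passing an object without a `foo` method would be reported by the checker at analysis time rather than failing mid-run.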


List of Freely available programming books – What is the single most influential book every Programmers should read

Source: Wikipedia


Best != easy and easy != best. Interpreted BASIC is easy, but not great for programming anything more complex than tic-tac-toe. C++, C#, and Java are very widely used, but none of them are what I would call easy.

Is Python an exception? It’s a fine scripting language if performance isn’t too critical. It’s a fine wrapper language for libraries coded in something performant like C++. Python’s basics are pretty easy, but it is not easy to write large or performant programs in Python.

Like most things, there is no shortcut to mastery. You have to accept that if you want to do anything interesting in programming, you’re going to have to master a serious, not-easy programming language. Maybe two or three. Source.

Type declarations mainly aren’t for the compiler — indeed, types can be inferred and/or dynamic so you don’t have to specify them.

They’re there for you. They help make code readable. They’re a form of active, compiler-verified documentation.


For example, look at this method/function/procedure declaration:

locate(tr, s) { … } 

  • What type is tr?
  • What type is s?
  • What type, if any, does it return?
  • Does it always accept and return the same types, or can they change depending on values of tr, s, or system state?

If you’re working on a small project — which most JavaScript projects are — that’s not a problem. You can look at the code and figure it out, or establish some discipline to maintain documentation.

If you’re working on a big project, with dozens of subprojects and developers and hundreds of thousands of lines of code, it’s a big problem. Documentation discipline will get forgotten, missed, inconsistent or ignored, and before long the code will be unreadable and simple changes will take enormous, frustrating effort.

But if the compiler obligates some or all type declarations, then you say this:

Node locate(NodeTree tr, CustomerName s) { … }

Now you know immediately what type it returns and the types of the parameters, you know they can’t change (except perhaps to substitutable subtypes); you can’t forget, miss, ignore or be inconsistent with them; and the compiler will guarantee you’ve got the right types.

That makes programming — particularly in big projects — much easier. Source: Dave Voorhis
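The same point carries over to gradually typed languages. Here is a hypothetical Python analogue of the `locate` example, where annotations act as tool-checked documentation; the `Node`, `NodeTree`, and `CustomerName` types are stand-ins, just as in the original:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Node:
    name: str
    children: list = field(default_factory=list)

NodeTree = Node       # alias: a tree is represented by its root node
CustomerName = str

def locate(tr: NodeTree, s: CustomerName) -> Optional[Node]:
    """Depth-first search for the node whose name equals s."""
    if tr.name == s:
        return tr
    for child in tr.children:
        found = locate(child, s)
        if found is not None:
            return found
    return None

tree = Node("root", [Node("alice"), Node("bob", [Node("carol")])])
```

A reader (or a checker) now sees at a glance what `tr` and `s` are and that the function may return `None`.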

  • COBOL. Verbose like no other, excess structure, unproductive, obtuse, limited, rigid.
  • JavaScript. Insane semantics, weak typing, silent failure. Thankfully, one can use transpilers for more rationally designed languages to target it (TypeScript, ReScript, js_of_ocaml, PureScript, Elm.)
  • ActionScript. Macromedia Flash’s take on ECMA 262 (i.e., ~JavaScript) back in the day. Its static typing was gradual, so the compiler wasn’t big on type error-catching. This one’s thankfully deader than Disco.
  • BASIC. Mandatory line numbering. Zero standardization. Not even a structured language — you’ve never seen that much spaghetti code.
  • In the realm of dynamically typed languages, anything that is not in the Lisp family. To me, Lisps are just more elegant and richer-featured than the rest.  Alexander feterman

Object-oriented programming is “a programming model that organizes software design around data, or objects, rather than functions and logic.”

Most games are made of “objects” like enemies, weapons, power-ups etc., and most games map very well to this paradigm. Each object is in charge of maintaining its own state, stats and other data. This makes it much easier for a programmer to develop and extend video games based on this paradigm.

I could go on, but I’d need an easel and charts. Chrish Nash
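The game-object idea above can be sketched in a few lines of Java; the Enemy and PowerUp classes here are hypothetical, just to show each object owning its own state and behaviour:

```java
// Each game object owns its own state and knows how to advance itself.
abstract class GameObject {
    protected int x, y;
    GameObject(int x, int y) { this.x = x; this.y = y; }
    abstract void update();              // called once per game tick
}

class Enemy extends GameObject {
    int health = 100;
    Enemy(int x, int y) { super(x, y); }
    @Override void update() { x -= 1; }  // marches left each tick
}

class PowerUp extends GameObject {
    PowerUp(int x, int y) { super(x, y); }
    @Override void update() { /* bobs in place */ }
}

public class Game {
    public static void main(String[] args) {
        GameObject[] world = { new Enemy(10, 0), new PowerUp(5, 5) };
        for (GameObject o : world) o.update();   // the loop needn't know concrete types
        System.out.println(((Enemy) world[0]).x);  // 9
    }
}
```

Adding a new kind of object means adding one class; the game loop itself never changes.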

Ok… I think this is one of the most important questions to answer. Based on my personal experience as a programmer, I would say you must learn the following five universal core concepts of programming to become a successful Java programmer.

(1) Mastering the fundamentals of the Java programming language – This is the most important skill you must learn to become a successful Java programmer. You must master the fundamentals of the language, especially areas like OOP, Collections, Generics, Concurrency, I/O, Strings, Exception handling, Inner Classes and JVM architecture.

Recommended readings are OCA Java SE 8 Programmer by Kathy Sierra and Bert Bates (first read Head First Java if you are a newcomer) and Effective Java by Joshua Bloch.

(2) Data Structures and Algorithms – Programming languages are basically just tools to solve problems. A problem generally has data to process in order to make decisions, and we have to build a procedure to solve that specific problem domain. In real life, the complexity of the problem domain and the amount of data we have to handle can be very large. That’s why it is essential to know basic data structures like Arrays, Linked Lists, Stacks, Queues, Trees, Heaps, Dictionaries, Hash Tables and Graphs, as well as basic algorithms like Searching, Sorting, Hashing, Graph algorithms, Greedy algorithms and Dynamic Programming.

Recommended readings are Data Structures & Algorithms in Java by Robert Lafore (beginner), Algorithms by Robert Sedgewick (intermediate) and Introduction to Algorithms, MIT Press, by CLRS (advanced).
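To give a flavour of why these structures matter, here is a minimal Java sketch of a classic stack application, bracket matching; the method name is purely illustrative:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A classic use of the stack data structure: checking balanced brackets.
public class Balanced {
    public static boolean isBalanced(String s) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : s.toCharArray()) {
            if (c == '(' || c == '[') stack.push(c);                               // remember opener
            else if (c == ')' && (stack.isEmpty() || stack.pop() != '(')) return false;
            else if (c == ']' && (stack.isEmpty() || stack.pop() != '[')) return false;
        }
        return stack.isEmpty();   // leftover openers mean it's unbalanced
    }

    public static void main(String[] args) {
        System.out.println(isBalanced("([()])"));  // true
        System.out.println(isBalanced("([)]"));    // false
    }
}
```

Knowing that a stack gives exactly last-in-first-out behaviour is what makes this solution obvious; that is the payoff of studying the structures themselves.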

(3) Design Patterns – Design patterns are general, reusable solutions to commonly occurring problems within a given context in software design, and they are absolutely crucial for a hardcore Java programmer. If you don’t use design patterns you will write much more code; it will be buggy, hard to understand and refactor, not to mention untestable. Patterns are also a great way to communicate your intent quickly to other programmers.

Recommended readings are Head First Design Patterns by Elisabeth Freeman and Kathy Sierra, and Design Patterns: Elements of Reusable Object-Oriented Software by the Gang of Four.
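As a taste of what these books cover, here is a minimal sketch of one classic pattern, Strategy, in Java; the DiscountPolicy names are invented for illustration:

```java
// Strategy pattern: swap an algorithm at runtime behind a common interface.
interface DiscountPolicy {
    double apply(double price);
}

class NoDiscount implements DiscountPolicy {
    public double apply(double price) { return price; }
}

class PercentageDiscount implements DiscountPolicy {
    private final double percent;
    PercentageDiscount(double percent) { this.percent = percent; }
    public double apply(double price) { return price * (1 - percent / 100); }
}

public class Checkout {
    // The caller picks the policy; Checkout never needs an if/else per discount type.
    public static double total(double price, DiscountPolicy policy) {
        return policy.apply(price);
    }

    public static void main(String[] args) {
        System.out.println(total(100.0, new NoDiscount()));            // 100.0
        System.out.println(total(100.0, new PercentageDiscount(25)));  // 75.0
    }
}
```

A new discount rule becomes a new class, which is also trivially unit-testable in isolation, which is exactly the intent-communication benefit the paragraph describes.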

(4) Programming Best Practices – Programming is not only about learning and writing code. Code readability is a universal subject in the world of computer programming. It helps standardize products and reduce future maintenance cost. Best practices help you, as a programmer, to think differently and improve your problem-solving attitude. A simple program can be written in many ways if given to multiple developers; that is where best practices come into the picture, and every programmer must be aware of them.

Recommended readings are Clean Code by Robert Cecil Martin and Code Complete by Steve McConnell.

(5) Testing and Debugging (T&D) – Besides writing the code for a specific problem domain, you have to learn how to test that code and debug it when needed. Some programmers skip unit testing (or other testing methodologies) and leave it to the QA guys. That leads to delivering code with 80% of its bugs still hiding in it to the QA team, reducing productivity and pushing your project toward failure. When misbehavior or a bug shows up during the testing phase, it is essential to know debugging techniques to identify the bug and its root cause.

Recommended readings are Debugging by David Agans and A Friendly Introduction to Software Testing by Bill Laboon.
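To illustrate the testing idea without committing to a framework, here is a minimal, framework-free Java sketch; real projects would typically use JUnit, and the withTax method is purely illustrative:

```java
// A tiny, framework-free unit test. The idea is the same as JUnit:
// exercise a unit of code, then assert on the result.
public class PriceTest {
    static double withTax(double net, double rate) {
        return net + net * rate;   // the "unit" under test
    }

    static void check(boolean ok, String message) {
        if (!ok) throw new AssertionError(message);
    }

    public static void main(String[] args) {
        check(withTax(100.0, 0.25) == 125.0, "25% tax on 100 should be 125");
        check(withTax(0.0, 0.25) == 0.0, "tax on zero should be zero");
        System.out.println("all tests passed");
    }
}
```

Writing these checks yourself, before the QA team ever sees the code, is exactly the habit the paragraph argues for.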

I hope these instructions will help you to become a successful Java programmer. Here I have explained only the universal core concepts that you must learn as a successful programmer. I am not mentioning any technologies that a Java programmer must know, such as Spring, Hibernate, Microservices and build tools, because those can change according to the problem domain or environment you are currently working in….. Happy Coding!

 

Hard to be balanced on this one.

They are useful to know. If ever you need to use, or make a derivative of algorithm X, then you’ll be glad you took the time.

If you learn them, you’ll learn general techniques: sorting, trees, iteration, transformation, recursion. All good stuff.

You’ll get a feeling for the kinds of code you cannot write if you need certain speeds or memory use, given a certain data set.

You’ll pass certain kinds of interview test.

You’ll also possibly never use them. Or use them very infrequently.

If you mention that on here, some will say you are a lesser developer. They will insist that the line between good and not good developers is algorithm knowledge.

That’s a shame, really.

In commercial work, you never start a day thinking ‘I will use algorithm X today’.

The work demands the solution. Not the other way around.

This is yet another proof that a lot of technical-sounding stuff is actually all about people. Their investment in something. Need for validation. Preference.

The more you know in development, the better. But I would not prioritize algorithms right at the top, based on my experience. Alan Mellor

So you’re inventing a new programming language and considering whether to write either a compiler or an interpreter for your new language in C or C++?

The only significant disadvantage of C++ is that bad programmers can create significantly more chaos in C++ than they can in C.

But for experienced C++ programmers, the language is immensely more powerful than C and writing clear, understandable code in C++ can be a LOT easier.

INCIDENTALLY:

If you’re going to actually do this – then I strongly recommend looking at a pair of tools called “flex” and “bison” (which are OpenSourced versions of the more ancient “lex” and “yacc”). These tools are “compiler-compilers” that are given a high level description of the syntax of your language – and automatically generate C code (which you can access from C++ without problems) to do the painful part of generating a lexical analyzer and a syntax parser. Steve Baker

Did you know you can google this answer yourself? Search for “c++ private keyword” and follow the link to access specifiers, which goes into great detail and has lots of examples. In case google is down, here’s a brief explanation of access specifiers:

  • The private access specifier in a class or struct definition applies to the declarations that occur after it. A private declaration is visible only inside the class/struct: not in derived classes or structs, and not from outside.
  • The protected access specifier makes declarations visible in the current class/struct and also in derived classes and structs, but not visible from outside. protected is not used very often and some wise people consider it a code smell.
  • The public access specifier makes declarations visible everywhere.
  • You can also use an access specifier on a base class in a class declaration to control the visibility of everything inherited from that base class. By Kurt Guntheroth
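Java offers an analogous set of access modifiers (with slightly different semantics: Java’s protected also grants package access); a minimal sketch with invented class names:

```java
// Java analogue of access specifiers: private, protected, public.
class Account {
    private double balance = 0;          // visible only inside Account
    protected String owner = "unset";    // visible to subclasses (and the package)

    public void deposit(double amount) { // visible everywhere
        if (amount > 0) balance += amount;
    }
    public double getBalance() { return balance; }
}

class SavingsAccount extends Account {
    void rename(String name) {
        owner = name;        // protected: accessible in a subclass
        // balance += 1;     // would not compile: balance is private to Account
    }
}

public class AccessDemo {
    public static void main(String[] args) {
        Account a = new Account();
        a.deposit(50);
        System.out.println(a.getBalance());  // 50.0
    }
}
```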

Rust programmers do mention the obvious shortcomings of the language.

Such as that a lot of data structures can’t be written without unsafe due to pointer complications.

Or that they haven’t agreed what it means to call unsafe code (although this is somewhat of a solved problem, just like calling into assembler from C0 in the sysbook).

The main problem of the language is that it doesn’t absolve the programmers from doing good engineering.

It just catches a lot of the human errors that can happen despite such engineering. Jonas Oberhauser.

Comparing cross-language performance of real applications is tricky. We usually don’t have the resources for writing said applications twice. We usually don’t have the same expertise in multiple languages. Etc. So, instead, we resort to smaller benchmarks. Occasionally, we’re able to rewrite a smallish critical component in the other language to compare real-world performance, and that gives a pretty good insight. Compiler writers often also have good insights into the optimization challenges for the language they work on.

My best guess is that C++ will continue to have a small edge in optimizability over Rust in the long term. That’s because Rust aims at a level of memory safety that constrains some of its optimizations, whereas C++ is not bound to such considerations. So I expect that very carefully written C++ might be slightly faster than equivalent very carefully written Rust.

However, that’s perhaps not a useful observation. Tiny differences in performance often don’t matter: The overall programming model is of greater importance. Since both languages are pretty close in terms of achievable performance, it’s going to be interesting watching which is preferable for real-life engineering purposes: The safe-but-tightly-constrained model of Rust or the more-risky-but-flexible model of C++.  By David VandeVoorde

  1. Lisp does not expose the underlying architecture of the processor, so it can’t replace my use of C and assembly.
  2. Lisp does not have significant statistical or visualization capabilities, so it can’t replace my use of R.
  3. Lisp was not built with unix filesystems in mind, so it’s not a great choice to replace my use of bash.
  4. Lisp has nothing at all to do with mathematical typesetting, so it won’t be replacing LaTeX anytime soon.
  5. And since I use vim, I don’t even have the excuse of learning lisp so as to modify emacs while it’s running.

In fewer words: for the tasks I get paid to do, lisp doesn’t perform better than the languages I currently use. By Barry RoundTree

What are some things that only someone who has been programming 20-50 years would know?

The truth of the matter gained through the multiple decades of (my) practice (at various companies) is ugly, not convenient and is not what you want to hear.

  1. The technical job interviews are a non-indicative and non-predictive waste of time, that is, to put it bluntly, garbage (a Navy Seal can be as brave as (s)he wants to be during the training, but only when the said Seal meets the bad guys face to face on the front line can her/his true mettle be revealed).
  2. An average project in an average company, both averaged the globe over, is staffed with mostly random, technically inadequate, people who should not be doing what they are doing.
  3. Such random people have no proper training in mathematics and computer science.
  4. As a result, all the code generated by these folks out there is a flimsy, low-quality, hugely inefficient, non-scalable, non-maintainable, hardly readable steaming pile of spaghetti mess – the absence of structure, order, discipline and understanding in one’s mind is reflected at the keyboard 100 percent of the time.
  5. It is a major hail mary, a hallelujah and a standing ovation to the genius of Alan Turing for being able to create a (Turing) Machine that, on the one hand, can take this infinite abuse and, on the other hand, being nothing short of a miracle, still produce binaries that just work. Or so they say.
  6. There is one and only one definition of a computer programmer: that of a person who combines all of the following skills and abilities:
    1. the ability to write a few lines of properly functioning (C) code in the matter of minutes
    2. the ability to write a few hundred lines of properly functioning (C) code in the matter of a small number of hours
    3. the ability to write a few thousand lines of properly functioning (C) code in the matter of a small number of weeks
    4. the ability to write a small number of tens of thousands of lines of properly functioning (C) code in the matter of several months
    5. the ability to write several hundred thousand lines of properly functioning (C) code in the matter of a small number of years
    6. the ability to translate a given set of requirements into source code that is partitioned into a (large) collection of (small and sharp) libraries and executables that work well together and that can withstand a steady-state non stop usage for at least 50 years
  7. It is this ability to sustain the above multi-year effort, during which the intellectual cohesion of the output remains consistent and invariant, that separates the random amateurs, of which there is a majority, from the professionals, of which there is a minority in the industry.
  8. There is one and only one definition of the above properly functioning code: that of a code that has a check mark in each and every cell of the following matrix:
    1. the code is algorithmically correct
    2. the code is easy to read, comprehend, follow and predict
    3. the code is easy to debug
      1. the intellectual effort to debug code, symbolized as E(d), is strictly larger than the intellectual effort to write code, symbolized as E(w). That is: E(d) > E(w). Thus, it is entirely possible to write a unit of code that even you, the author, can not debug
    4. the code is easy to test
      1. in different environments
    5. the code is efficient
      1. meaning that it scales well performance-wise when the size of the input grows without bound in both configuration and data
    6. the code is easy to maintain
      1. the addition of new and the removal or the modification of the existing features should not take five metric tons of blood, three years and a small army of people to implement and regression test
      2. the certainty of and the confidence in the proper behavior of the system thus modified should be high
      3. (read more about the technical aspects of code modification in the small body of my work titled “Practical Design Patterns in C” featured in my profile)
      4. (my claim: writing proper code in general is an optimization exercise from the theory of graphs)
    7. the code is easy to upgrade in production
      1. lifting the Empire State Building in its entirety 10 feet in the thin blue air and sliding a bunch of two-by-fours underneath it temporarily, all the while keeping all of its electrical wires and the gas pipes intact, allowing the dwellers to go in and out of the building and operating its elevators, should all be possible
      2. changing the engine and the tires on an 18-wheeler truck hauling down a highway at 80 miles per hour should be possible
  9. A project staffed with nothing but technically capable people can still fail – the team cohesion and the psychological compatibility of team members is king. This is raw and unbridled physics – a team, or a whole, is more than the sum of its members, or parts.
  10. All software project deadlines without exception are random and meaningless guesses that have no connection to reality.
  11. Intelligence does not scale – a million fools chained to a million keyboards will never amount to one proverbial Einstein. Source
 

A function pulls a computation out of your program and puts it in a conceptual box labeled by the function’s name. This lets you use the function name in a computation instead of writing out the computation done by the function.

Writing a function is like defining an obscure word before you use it in prose. It puts the definition in one place and marks it out saying, “This is the definition of xxx”, and then you can use the one word in the text instead of writing out the definition.

Even if you only use a word once in prose, it’s a good idea to write out the definition if you think that makes the prose clearer.

Even if you only use a function once, it’s a good idea to write out the function definition if you think it will make the code clearer to use a function name instead of a big block of code. Source.
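A minimal Java sketch of the idea: the arithmetic is boxed into a named method so the call site reads like prose (the mean example is invented for illustration):

```java
// Naming a computation: the loop below is boxed into `mean`, so call
// sites state intent instead of repeating the mechanics.
public class Stats {
    static double mean(double[] xs) {
        double sum = 0;
        for (double x : xs) sum += x;
        return sum / xs.length;
    }

    public static void main(String[] args) {
        double[] scores = { 2, 4, 6 };
        // Reads as "the mean of the scores", not as a loop and a division:
        System.out.println(mean(scores));  // 4.0
    }
}
```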

Conditional statements of the form if this instance is type T then do X can generally — and usually should — be removed by appropriate use of polymorphism.

All conditional statements might conceivably be replaced in that fashion, but the added complexity would almost certainly negate its value. It’s best reserved for where the relevant types already exist.

Creating new types solely to avoid conditionals sometimes makes sense (e.g. maybe create distinct nullable vs not-nullable types to avoid if-null/if-not-null checks) but usually doesn’t. Source.
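A minimal Java sketch of the refactoring, using an invented Shape hierarchy; the per-type conditional disappears because each type supplies its own behaviour:

```java
// Before: if (shape instanceof Circle) ... else if (shape instanceof Square) ...
// After: each type carries its own implementation, so no conditional is needed.
interface Shape {
    double area();
}

class Circle implements Shape {
    final double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    final double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

public class Shapes {
    public static void main(String[] args) {
        Shape[] shapes = { new Circle(1), new Square(2) };
        for (Shape s : shapes)
            System.out.println(s.area());  // no instanceof checks anywhere
    }
}
```

Note this pays off exactly as the answer says: when the types already exist, dispatch is free; inventing types purely to delete one if statement usually is not worth it.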

Something bad happens as your Java code runs.

Throw an exception.

The following lines after the throw do not run, saving them from the bad thing.

Control is handed back up the call stack until the Java runtime finds a catch() statement that matches the exception.

The code resumes running from there. Source: Allan Mellor
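A minimal sketch of that control flow, with an invented divide method:

```java
// Control jumps from the throw to the nearest matching catch up the
// call stack; the lines after the throw never run.
public class ThrowDemo {
    static int divide(int a, int b) {
        if (b == 0) throw new ArithmeticException("division by zero");
        return a / b;  // skipped entirely when b == 0
    }

    public static void main(String[] args) {
        try {
            divide(10, 0);
            System.out.println("never printed");   // saved from the bad thing
        } catch (ArithmeticException e) {
            System.out.println("caught: " + e.getMessage());  // execution resumes here
        }
    }
}
```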

Google has better programmers, and they’ve been working on the problem space longer than either Spotify or the other providers have existed.

YouTube has a year and a half on Spotify, for example, and they’ve been employing a lot of “organ bank” engineers from Google proper, for various problems — like the “similar to this one” problem — and the engineers doing the work are working on much larger teams, overall.

Spotify is resource starved, because they really aren’t raking in the same ratio of money that YouTube does. By Terry Lambert

Over the past two decades, Java has moved from a fairly simple ecosystem, with the relatively straightforward Ant build tool, to a sophisticated ecosystem with Maven or Gradle basically required. As a result, this kind of approach doesn’t really work well anymore. I highly recommend that you download the Community Edition of IntelliJ IDEA; this is a free version of a great commercial IDE. By Joshua Gross

Best bet is to turn it into a record type as a pure data structure. Then you can start to work on that data. You might do that directly, or use it to construct some OOP objects with application-specific behaviours on them. Up to you.

You can decide how far to take layering as well. Small apps work ok with the data struct in the exact same format as the JSON data passed around. But you might want to isolate that and use a mapping to some central domain model. Then if the JSON schema changes, your domain model won’t.

Libraries such as Jackson and Gson can handle the conversion. Many frameworks have something like this built in, so you get handed a pure data struct ‘object’ containing all the data that was in the JSON.

Things like JSON Validator and JSV Schemas can help you validate the response JSON if need be. By Alan Mellor
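A minimal sketch of the record-as-pure-data idea, assuming Java 16+ records and invented names; in practice a library like Jackson or Gson would populate the record from the JSON text:

```java
// A record as a pure data structure mirroring the JSON schema, then a
// mapping into a central domain model so schema changes stay contained.
record CustomerDto(String name, String email) {}   // shape of the incoming JSON

class Customer {                                   // the domain model
    final String displayName;
    Customer(String displayName) { this.displayName = displayName; }
}

public class Mapping {
    // If the JSON schema changes, only this mapping needs to change.
    static Customer toDomain(CustomerDto dto) {
        return new Customer(dto.name());
    }

    public static void main(String[] args) {
        CustomerDto dto = new CustomerDto("Ada", "ada@example.com");
        System.out.println(toDomain(dto).displayName);  // Ada
    }
}
```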

Keith Adams already gave an excellent overview of Slack’s technology stack so I will do my best to add to his answer.

Products that make up Slack’s tech stack include: Amazon (CloudFront, CloudSearch, EMR, Route 53, Web Services), Android Studio, Apache (HTTP Server, Kafka, Solr, Spark, Web Server), Babel, Brandfolder, Bugsnag, Burp Suite, Casper Suite, Chef, DigiCert, Electron, Fastly, Git, HackerOne, JavaScript, Jenkins, MySQL, Node.js, Objective-C, OneLogin, PagerDuty, PHP, Redis, Smarty, Socket, Xcode, and Zeplin.

Additionally, here’s a list of other software products that Slack is using internally:

  • Marketing: AdRoll, Convertro, MailChimp, SendGrid
  • Sales and Support: Cnflx, Front, Typeform, Zendesk
  • Analytics: Google Analytics, Mixpanel, Optimizely, Presto
  • HR: AngelList Jobs, Culture Amp, Greenhouse, Namely
  • Productivity: ProductBoard, Quadro, Zoom, Slack (go figure!)

For a complete list of software used by Slack, check out: Slack’s Stack on Siftery

Some other fun facts about Slack:

  • Slack is used by 55% of Unicorns (and 59% of B2B Unicorns)
  • Slack has 85% market share in Siftery’s Instant Messaging category
  • Slack is used by 42% of both Y Combinator and 500 Startups companies
  • 35% of companies in the Sharing Economy use Slack

(Disclaimer: The above data was pulled from Siftery and has been verified by individuals working at Slack) By Gerry Giacoman Colyer

Programmers should use recursion when it is the cleanest way to define a process. Then, WHEN AND IF IT MATTERS, they should refine the recursion and transform it into a tail recursion or a loop. When it doesn’t matter, leave it alone. Jamie Lawson
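A minimal Java sketch of that progression, using factorial as the example: the plain recursion is clearest, and the tail-recursive and loop forms are the refinements you apply only when it matters:

```java
// The same process defined three ways: plain recursion (clearest),
// tail recursion (accumulator carries the state), and the loop it becomes.
public class Factorial {
    static long recursive(int n) {
        return n <= 1 ? 1 : n * recursive(n - 1);
    }

    static long tail(int n, long acc) {
        return n <= 1 ? acc : tail(n - 1, acc * n);
    }

    static long loop(int n) {
        long acc = 1;
        for (int i = 2; i <= n; i++) acc *= i;
        return acc;
    }

    public static void main(String[] args) {
        System.out.println(recursive(10));  // 3628800
        System.out.println(tail(10, 1));    // 3628800
        System.out.println(loop(10));       // 3628800
    }
}
```

(Worth noting: the JVM does not perform tail-call elimination, so on very deep inputs the loop form is the one that actually avoids stack overflow.)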
 
 

Your phone runs a version of Linux, which is programmed in C. Only the top layer is programmed in Java, because performance usually isn’t very important in that layer.

Your web browser is programmed in C++ or Rust. There is no Java anywhere. Java wasn’t secure enough for browser code (but somehow C++ was? Go figure.)

Your Windows PC is programmed mostly in C++. Windows is very old code, that is partially C. There was an attempt to recode the top layer in C#, but performance was not good enough, and it all had to be recoded in C++. Linux PCs are coded in C.

Your intuition that most things are programmed in Java is mistaken. Kurt Guntheroth

That’s not possible in Java, or at least the language steers you away from attempting that.

Global variables have significant disadvantages in terms of maintainability, so the language itself has no way of making something truly global.

The nearest approach would be to abuse some language features like so:

  • public class Globals { 
  •     public static int[] stuff = new int[10]; 
  • } 

Then you can use this anywhere with

  • Globals.stuff[0] = 42; 

Java isn’t Python, C nor JavaScript. It’s reasonably opinionated about using Object Oriented Programming, which the above snippets are not examples of.

This also uses a raw array, which is a fixed size in Java. Again, not very useful, we prefer ArrayList for most purposes, which can grow.

I’d recommend the above approach if and only if you have no alternatives, are not really wanting to learn Java and just need a dirty utility hack, or are starting out in programming just finding your feet. Alan Mellor

In which situations is NoSQL better than relational databases such as SQL? What are specific examples of apps where switching to NoSQL yielded considerable advantages?

Warning: The below answer is a bit oversimplified, for pedagogical purposes. Picking a storage solution for your application is a very complex issue, and every case will be different – this is only meant to give an overview of the main reason why people go NoSQL.

There are several possible reasons that companies go NoSQL, but the most common scenario is probably when one database server is no longer enough to handle your load. NoSQL solutions are much better suited to distributing load over shitloads of database servers.

This is because relational databases traditionally deal with load balancing by replication. That means you have multiple slave databases that watch a master database for changes and replicate them to themselves. Reads are made from the slaves, and writes are made to the master. This works up to a certain level, but it has the annoying side-effect that the slaves will always lag slightly behind, so there is a delay between the time of writing and the time the object is available for reading, which is complex and error-prone to handle in your application. Also, the single master eventually becomes a bottleneck no matter how powerful it is. Plus, it’s a single point of failure.

NoSQL generally deals with this problem by sharding. Overly simplified, it means that users with userid 1–1000000 are on server A, users with userid 1000001–2000000 are on server B, and so on. This solves the problems that relational replication has, but the drawback is that features such as aggregate queries (SUM, AVG etc.) and traditional transactions are sacrificed.

For some case studies, I believe Couchbase pimps a whitepaper on their web site here: http://www.couchbase.com/why-nosql/use-cases .  Mattias Peter Johansson
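A toy Java sketch of range-based sharding in the spirit of that description; the ranges and server names are illustrative:

```java
// A toy range-based shard router: each userid range maps to one server,
// so no single master sees every write.
public class ShardRouter {
    static String serverFor(long userId) {
        if (userId <= 1_000_000L) return "server-A";
        if (userId <= 2_000_000L) return "server-B";
        return "server-C";
    }

    public static void main(String[] args) {
        System.out.println(serverFor(42));         // server-A
        System.out.println(serverFor(1_500_000));  // server-B
    }
}
```

The trade-off described above is visible here: a SUM over all users now has to visit every server, which is why cross-shard aggregates and transactions get sacrificed.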

Chrome is coded in C++, assembler and Python. How could three different languages be used to obtain only one product? What is the method used to merge programming languages to create software?

Concretely, a processor can only execute one kind of instruction: machine code, written in assembler. The exact instructions also depend on the type of processor.

As assembler requires several operations just to perform a simple addition, compilers were created which, starting from a higher-level language (easier to write), are able to automatically generate the assembly code.

These compilers can sometimes accept several languages. For example, the GCC compiler can compile both C and C++, and it also accepts embedded pieces of assembler, introduced by the keyword __asm__. Assembler is still something to avoid wherever possible, because it is completely machine-dependent and can therefore be a source of problems and unpleasant surprises.

More generally, we also often create multi-language applications using several components (libraries, DLLs, ActiveX, etc.). The interfaces between these components are managed by the operating system and allow Java, C, C++, C#, Python, and everything else you could wish for to coexist happily. A certain finesse is, however, necessary in the transitions between languages, because each one has its implicit rules, which must therefore be enforced very explicitly.

For example, an object coming from the C++ world and transferred through these interfaces into a Java program will have to be destroyed explicitly; the Java garbage collector only manages its own objects.

Another practical interface is web services: each module, whatever its technology, can communicate with the others by exchanging serialized objects in JSON… which is much less error-prone!  Source:  Vincent Steyer

What is the most dangerous code you have ever seen?

This line removes the filesystem (starting from root /)
  • sudo rm -rf --no-preserve-root /
Or for more fun, a Russian roulette:
  • [ $(( RANDOM % 6 )) -eq 0 ] && rm -rf --no-preserve-root / || echo *clic* 

(a one-in-six chance of executing the destructive command described above; otherwise “*clic*” is displayed)

Javascript (or more precisely ECMAScript). And it’s a lot faster than the others. Surprised?

When in 2009 I heard about Node.js, I thought that people had lost their minds to use JavaScript on the server side. But I had to change my mind.

Node.js is lightning fast. Why? First of all because it is async, but with V8, the open-source engine of Google Chrome, even the JavaScript language itself became incredibly fast. The war of the browsers brought us hyper-optimized JavaScript interpreters/compilers.

In intensive computational algorithms, it is more than an order of magnitude faster than PHP, Ruby, and Python. In fact, with V8 (http://code.google.com/p/v8/ ), JavaScript became the fastest scripting language on earth.

Does it sound too bold? Look at the benchmarks: http://shootout.alioth.debian.org/

Note: with regular expressions, V8 is even faster than C and C++! Impossible? The reason is that V8 compiles native machine code ad-hoc for the specific regular expressions (see http://blog.chromium.org/2009/02/irregexp-google-chromes-new-regexp.html )

If you are interested, you can learn how to use node: http://www.readwriteweb.com/hack/2011/04/6-free-e-books-on-nodejs.php 🙂

Regarding the language itself: JavaScript is not the most elegant language, but it is definitely a lot better than some people may think. The current version of JavaScript (or better, ECMAScript as specified in ECMA-262 5th edition) is good. If you adopt “use strict”, some strange and unwanted behaviors of the language are eliminated. Harmony, the codename for a future version, is going to be even better and add some extra syntactic sugar similar to some of Python’s constructs.

If you want to learn Javascript (not just server side), the best book is Professional Javascript for Web Developers by Nicholas C. Zakas. But if you are cheap, you can still get a lot from http://eloquentjavascript.net/ and http://addyosmani.com/resources/essentialjsdesignpatterns/book/

Does JavaScript still sound too archaic? Try CoffeeScript (from the same author as Backbone.js), which compiles to JavaScript. CoffeeScript makes for cleaner, easier and more concise programming in environments that use JavaScript (i.e. the browser and Node.js). It’s a relatively new language that is not perfect yet, but it is getting better: http://coffeescript.org/

source: Here

In general, the important advantage of C++ is that it uses computers very efficiently, and offers developers a lot of control over expensive operations like dynamic memory management. Writing in C++ versus Java or python is the difference between spinning up 1,000 cloud instances versus 10,000. The cost savings in electricity alone justifies the cost of hiring specialist programmers and dealing with the difficulties of writing good C++ code. Source

You really need to understand C++ pretty well to have any idea why Rust is the way it is. If you only want to work at Mozilla, learn Rust. Otherwise learn C++ and then switch to Rust if it breaks out and becomes more popular.

Rust is one step forward and two steps back from C++. Embedding the notion of ownership in the language is an obvious improvement over C++. Yay. But Rust doesn’t have exceptions. Instead, it has a bunch of strange little features to provide the RAII’ish behavior that makes C++ really useful. I think on average people don’t know how to teach or how to use exceptions even still. It’s too soon to abandon this feature of C++. Source: Kurt Guntheroth

Java or Javascript-based web applications are the most common. (Yuk!) And, consequently, you’ll be a “dime a dozen” programmer if that’s what you do.

On the other hand, (C++ or C) embedded system programming (i.e. hardware-based software), high-capacity backend servers in data centers, internet router software, factory automation/robotics software, and other operating system software are the least common, and consequently the most in demand. Source: Steven Ussery

I want to learn to program. Should I begin with Java or Python?

Your first language doesn’t matter very much. Both Java and Python are common choices. Python is more immediately useful, I would say.

When you are learning to program, you are learning a whole bunch of things simultaneously:

  • How to program
  • How to debug programs that aren’t working
  • How to use programming tools
  • A language
  • How to learn programming languages
  • How to think about programming
  • How to manage your code so you don’t paint yourself into corners, or end up with an unmanageable mess
  • How to read documentation

Beginners often focus too much on their first language. It’s necessary, because you can’t learn any of the others without that, but you can’t learn how to learn languages without learning several… and that means any professional knows a bunch and can pick up more as required. Source: Andrew McGregor

Absolutely.

If you’re a backend or full-stack engineer, it’s reasonable to focus on your preferred tech, but you’ll be expected to have at least some familiarity with Java, C#, Python, PHP, bash, Docker, HTML/CSS…

And, you need to be good with SQL.

That’s the minimum you should achieve.

The more you know, the more employable — and valuable to your employer or clients — you will be.

Also, languages and platforms are tools. Some tools are more appropriate to some tasks than others.

That means sometimes Node.js is the preferred choice to meet the requirements, and sometimes Java is a better choice — after considering the inevitable trade-offs that come with every technical decision.  Source: Dave Voorhis

Just one?

No, no, that’s not how it works.

To be a competent back-end developer, you need to know at least one of the major, core, back-end programming languages — Java (and its major frameworks, Spring and Hibernate) and/or C# (and its major frameworks, .NET Core and Entity Framework.)

You might want to have passing familiarity with the up-and-coming Go.

You need to know SQL. You can’t even begin to do back-end development without it. But don’t bother learning NoSQL tools until you need to use them.

You should be familiar with the major cloud platforms, AWS and Azure. Others you can pick up if and as needed.

Know Linux, because most back-end infrastructure runs on Linux and you’ll eventually encounter it, even if it’s often hived away into various cloud-based services.

You should know Python and bash scripts. Understand Apache Web Server configuration. Be familiar with Nginx, and if you’re using Java, have some understanding of how Apache Tomcat works.

Understand containerization. Be good with Docker.

Be familiar with JavaScript and HTML/CSS. You might not have to write them, but you’ll need to support front-end devs and work with them and understand what they do. If you do any Node.js (some of us do a lot, some do none), you’ll need to know JavaScript and/or TypeScript and understand Node.

That’ll do for a start.

But even more important than the above, learn computer science.

Learn it, and you’ll learn that programming languages are implementations of fundamental principles that don’t change, whilst the languages themselves come and go.

Learn those fundamental principles, and it won’t matter what languages are in the market — you’ll be able to pick up any of them as needed and use them productively. Source: Dave Voorhis

It sounds like you’re spending too much time studying Python and not enough time writing Python.

The only way to become good at any programming language — and programming in general — is to practice writing code.

It’s like learning to play a musical instrument: Practice is essential.

Try to write simple programs that do simple things. When you get them to work, write more complex programs to do more complex things.

When you get stuck, read documentation, tutorials and other people’s code to help you get unstuck.

If you’re still stuck, set aside what you’re stuck on and work on a different program.

But keep writing code. Write a lot of code.

The more code you write, the easier it will become to write more code. Source: Dave Voorhis

It depends on what you want to do.

If you want to just mess around with programming as a hobby, it’s fine. In fact, it’s pretty good. Since it’s “batteries included”, you can often get a lot done in just a few lines of code. Learn Python 3, not 2.

If you want to be a professional software engineer, Python’s a poor place to start. Its syntax isn’t terrible, but it’s weird. Its take on OO is different from almost all other OO languages. It’ll teach you bad habits that you’ll have to unlearn when switching to another language.

If you want to eventually be a professional software engineer, learn another OO language first. I prefer C#, but Java’s a great choice too. If you don’t care about OO, C is a great choice. Nearly all major languages inherited their syntax from C, so most other languages will look familiar if you start there.

C++ is a stretch these days. Learn another OO language first. You’ll probably eventually have to learn JavaScript, but don’t start there. It… just don’t.

So, ya. If you just want to do some hobby coding and write some short scripts and utilities, Python’s fine. If you want to eventually be a pro SE, look elsewhere. Source: Chris Nash

You master a language by using it, not just reading about it and memorizing trivia. You’ll pick up and internalize plenty of trivia anyway while getting real world work done.

Reading books and blogs and whatnot helps, but those are more meaningful if you have real world problems to apply the material to. Otherwise, much of it is likely to go into your eyeballs and ooze right back out of your ears, metaphorically speaking.

I usually don’t dig into all the low level details when reading a programming book, unless it’s specifically needed for a problem I am trying to solve. Or, it caught my curiosity, in which case, satisfying my curiosity is the problem I am trying to solve.

Once you learn the basics, use books and other resources to accelerate you on your journey. What to read, and when, will largely be driven by what you decide to work on.

Bjarne Stroustrup, the creator of C++, has this to say:

And no, I’m not a walking C++ dictionary. I do not keep every technical detail in my head at all times. If I did that, I would be a much poorer programmer. I do keep the main points straight in my head most of the time, and I do know where to find the details when I need them.

Source: Joe Zbiciak

Scale. There is no field other than software where a company can have 2 billion customers, and do it with only a few tens of thousands of employees. The only others that come close are petroleum and banking – both of which are also very highly paid. By David Seidman

Professional programmer’s code:

  • //Here we address a strange issue that was seen on 
  • //production a few times, but is not reproduced 
  • //locally. The user can be mysteriously logged out after 
  • //clicking the Back button. This seems related to recent 
  • //changes to the redirect scheme upon order confirmation. 
  • login(currentUser()); 

Average programmer’s code:

  • //Hotfix – don’t ask 
  • login(currentUser()); 

Professional programmer’s commit message:

  • Fix memory leak in connection pool 
 
  • We’ve seen connections leaking from the pool 
  • if any query had already been executed through 
  • it and then an exception is thrown. 
  •  
  • The root cause was found in the ConnectionPool.addExceptionHook() 
  • method, which ignored certain types of exceptions. 

Average programmer’s commit message:

  • Small fix 

Professional programmer’s test naming:

  • login_shouldThrowUserNotFoundException_ifUserAbsentInDB() 
  • login_shouldSetCurrentUser_ifLoginSuccessful() 
  • login_shouldRecordAuditMessage_uponUnsuccessfulLogin() 

Average programmer’s test naming:

  • testLogin1() 
  • testLogin2() 
  • testLogin3() 

After the first few years of programming, when the urge to put some cool-looking construct only you can understand into every block of code wears off, you’ll likely come to the conclusion that these examples are actually the code you want to encounter when opening a new project.

If we look at the apps written by good vs. average programmers (not talking about total beginners), the code itself is not that much different – but if small conveniences everywhere allow you to avoid frustration while reading it, it is likely written by a professional.

The only valid measurement of code quality is WTFs/minute.

Here are 5 very common ones. If you don’t know these then you’re probably not ready.

  1. Graph Search – Depth-first and Breadth-first search
  2. Binary Search
  3. Backtracking using Recursion and Memoization
  4. Searching a Binary Search Tree
  5. Recursion over a Binary Tree

Of course, there are many others too.
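For instance, item 2, binary search, is famously easy to get wrong at the boundaries. A minimal Python sketch of the classic loop:

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # midpoint of the remaining range
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1              # discard the left half
        else:
            hi = mid - 1              # discard the right half
    return -1
```

In an interview this usually appears in disguise, e.g. "find the first version that fails" or "find the rotation point of a sorted array".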

Another thing to keep in mind – you won’t be asked these directly. It will be disguised as a unique situation.

Source: Quora

I worked as an academic in physics for about 10 years, and used Fortran for much of that time. I had to learn Fortran for the job, as I was already fluent in C/C++.

The prevalence of Fortran in computational physics comes down to three factors:

  1. Performance. Yes, Fortran code is typically faster than C/C++ code. One of the main reasons for this is that Fortran compilers are heavily optimised towards making fast code, and the Fortran language spec is designed such that compilers will know what to optimise. It’s possible to make your C program as fast as a Fortran one, but it’s considerably more work to do so.
  2. Convenience. Imagine you want to add a scalar to an array of values – this is the sort of thing we do all the time in physics. In C you’d either need to rely on an external library, or you’d need to write a function for this (leading to verbose code). In Fortran you just add them together, and the scalar is broadcasted across all elements of the array. You can do the same with multiplication and addition of two arrays as well. Fortran was originally the Formula-translator, and therefore makes math operations easy.
  3. Legacy. When you start a PhD, you’re often given some ex-post-doc’s (or professor’s) code as a starting point. Oftentimes this code will be in Fortran (either because of the age of the person, or because they were given Fortran code). Unfortunately sometimes this code is F77, which means that we still have people in their 20s learning F77 (which I think is just wrong these days, as it gives Fortran as a whole a bad name). Source: Erlend Davidson
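The array convenience in point 2 has a close analogue in Python’s NumPy, which may help if you know Python but not Fortran (a sketch of the same idea, not Fortran code):

```python
import numpy as np

values = np.array([1.0, 2.5, 4.0])

# Add a scalar to an array: the scalar is broadcast across every element,
# just as Fortran broadcasts it, with no explicit loop.
shifted = values + 0.5                        # [1.5, 3.0, 4.5]

# Elementwise multiplication of two arrays works the same way.
scaled = values * np.array([2.0, 2.0, 2.0])   # [2.0, 5.0, 8.0]
```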

My friend, if you like C, you are gonna looooove B. B was C’s predecessor language. It’s a lot like C, but for C, Thompson and Ritchie added in data types. Basically, C is for lazy programmers. The only data type in B was determined by the size of a word on the host system. B was for “real-men programmers” who ate Hollerith cards for extra fiber, chewed iron into memory cores when they ran out of RAM, and dreamed in hexadecimal. Variables are evaluated contextually in B, and it doesn’t matter what the hell they contain; they are treated as though they hold integers in integer operations, and as though they hold memory addresses in pointer operations. Basically, B has all of the terseness of an assembly language, without all of the useful tooling that comes along with assembly.

As others indicate, pointers do not hold memory; they hold memory addresses. They are typed because before you go to that memory address, you probably want to know what’s there. Among other issues, how big is “there”? Should you read eight bits? Sixteen? Thirty-two? More? Inquiring minds want to know! Of course, it would also be nice to know whether the element at that address is an individual element or one element in an array, but C is for “slightly less real men programmers” than B. Java does fully differentiate between scalars and arrays, and therefore is clearly for the weak minded. /jk Source: Joshua Gross

Hidden Features of C#

What are the most hidden features or tricks of C# that even C# fans, addicts, experts barely know?

Here are the revealed features so far:

Keywords

Attributes

Syntax

Language Features

Visual Studio Features

Framework

Methods and Properties

  • String.IsNullOrEmpty() method by KiwiBastard
  • List.ForEach() method by KiwiBastard
  • BeginInvoke()/EndInvoke() methods by Will Dean
  • Nullable<T>.HasValue and Nullable<T>.Value properties by Rismo
  • GetValueOrDefault method by John Sheehan

Tips & Tricks

  • Nice method for event handlers by Andreas H.R. Nilsson
  • Uppercase comparisons by John
  • Access anonymous types without reflection by dp
  • A quick way to lazily instantiate collection properties by Will
  • JavaScript-like anonymous inline-functions by roosteronacid

Other

  • netmodules by kokos
  • LINQBridge by Duncan Smart
  • Parallel Extensions by Joel Coehoorn
  • This isn’t C# per se, but I haven’t seen anyone who really uses System.IO.Path.Combine() to the extent that they should. In fact, the whole Path class is really useful, but no one uses it!
  • lambdas and type inference are underrated. Lambdas can have multiple statements and they double as a compatible delegate object automatically (just make sure the signatures match), as in:
Console.CancelKeyPress +=
    (sender, e) => {
        Console.WriteLine("CTRL+C detected!\n");
        e.Cancel = true;
    };
  • From Rick Strahl: You can chain the ?? operator so that you can do a bunch of null comparisons.
string result = value1 ?? value2 ?? value3 ?? String.Empty;

When normalizing strings, it is highly recommended that you use ToUpperInvariant instead of ToLowerInvariant because Microsoft has optimized the code for performing uppercase comparisons.

I remember a coworker who always changed strings to uppercase before comparing. I always wondered why he did that, because I feel it’s more “natural” to convert to lowercase first. After reading the book, now I know why.

  • My favorite trick is using the null coalesce operator and parentheses to automagically instantiate collections for me.
private IList<Foo> _foo;

public IList<Foo> ListOfFoo 
    { get { return _foo ?? (_foo = new List<Foo>()); } }
  • Here are some interesting hidden C# features, in the form of undocumented C# keywords:
__makeref

__reftype

__refvalue

__arglist

These are undocumented C# keywords (even Visual Studio recognizes them!) that were added for more efficient boxing/unboxing prior to generics. They work in coordination with the System.TypedReference struct.

There’s also __arglist, which is used for variable length parameter lists.

One thing folks don’t know much about is System.WeakReference — a very useful class that keeps track of an object but still allows the garbage collector to collect it.

The most useful “hidden” feature would be the yield return keyword. It’s not really hidden, but a lot of folks don’t know about it. LINQ is built atop this; it allows for delay-executed queries by generating a state machine under the hood. Raymond Chen recently posted about the internal, gritty details.

  • Using @ for variable names that are keywords.
var @object = new object();
var @string = "";
var @if = IpsoFacto();
  • If you want to exit your program without calling any finally blocks or finalizers use FailFast:
Environment.FailFast()

Read more hidden C# Features at Hidden Features of C#? – Stack Overflow

Hidden Features of Python

Source: Stack Overflow

What IDE to Use for Python


Acronyms used:

 L  - Linux
 W  - Windows
 M  - Mac
 C  - Commercial
 F  - Free
 CF - Commercial with Free limited edition
 ?  - To be confirmed

What is the right JSON content type?

For JSON text:

application/json

Example: { "Name": "Foo", "Id": 1234, "Rank": 7 }

For JSONP (runnable JavaScript) with callback:

application/javascript

Example: functionCall({"Name": "Foo", "Id": 1234, "Rank": 7});


IANA has registered the official MIME Type for JSON as application/json.

When asked about why not text/json, Crockford seems to have said JSON is not really JavaScript nor text and also IANA was more likely to hand out application/* than text/*.


JSON (JavaScript Object Notation) and JSONP (“JSON with padding”) formats seem to be very similar, and therefore it might be very confusing which MIME type they should be using. Even though the formats are similar, there are some subtle differences between them.

So whenever in any doubt, I have a very simple approach (which works perfectly fine in most cases), namely: go and check the corresponding RFC document.

JSON RFC 4627 (The application/json Media Type for JavaScript Object Notation (JSON)) is the specification of the JSON format. It says in section 6 that the MIME media type for JSON text is

application/json.

JSONP: JSONP (“JSON with padding”) is handled in a different way than JSON by the browser. JSONP is treated as a regular JavaScript script, and therefore it should use application/javascript, the current official MIME type for JavaScript. In many cases, however, the text/javascript MIME type will work fine too.

Note that text/javascript has been marked as obsolete by RFC 4329 (Scripting Media Types), and it is recommended to use the application/javascript type instead. However, for legacy reasons, text/javascript is still widely used and has cross-browser support (which is not always the case with the application/javascript MIME type, especially in older browsers).
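To make the pairing concrete, here is a small Python sketch; the functionCall name is just the callback example from above:

```python
import json

payload = {"Name": "Foo", "Id": 1234, "Rank": 7}

# Plain JSON text: serve with application/json (per RFC 4627 / the IANA registration).
json_body = json.dumps(payload)
json_type = "application/json"

# JSONP: the same data wrapped in a JavaScript call, so it is executable script,
# not data. Serve with application/javascript (per RFC 4329).
jsonp_body = "functionCall({});".format(json.dumps(payload))
jsonp_type = "application/javascript"
```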

What are some mistakes to avoid while learning programming?

  1. Overuse of the GOTO statement. Most schools teach that this is a no-no.
  2. Not commenting your code with proper documentation – what exactly does the code do?
  3. Endless loops – a structured loop that has no exit point.
  4. Overwriting memory – destroying data and/or code, especially with dynamic allocation, stacks, and queues.
  5. Not following discipline – requirements, design, code, test, implementation.

Moreover, complex code should have a blueprint – a design. Otherwise it is like saying let’s build a house without a floor plan. Programs that have a requirements and design specification BEFORE writing code tend to have a LOWER error rate – less time debugging and fixing errors. Source: Quora

Lisp.

The thing that always struck me is that the best programmers I would meet or read all had a couple of things in common.

  1. They didn’t use IDEs, preferring Emacs or Vim.
  2. They all learned or used Functional Programming (Lisp, Haskell, OCaml)
  3. They all wrote or endorsed some kind of testing, even if it’s just minimal TDD.
  4. They avoided fads and dependencies like a plague.

It is a basic truth that learning Lisp, or any functional programming, will fundamentally change the way you program and think about programming. Source: Quora

The two work well together. Both are effective at what they do:

  • Pairing is a continuous code review, with a human-powered ‘auto suggest’. If you like github copilot, pairing does that with a real brain behind it.
  • TDD forces you to think about how your code will be used early on in the process. That gives you the chance to code things so they are clear and easy to use

Both of these are ‘shift-left’ activities. In the days of old, code review and testing happened after the code was written. Design happened up front, but separate to coding, so you never got to see if the design was actually codeable properly. By shifting these activities to before the code gets written, we get a much faster feedback loop. That enables us to make corrections and improvements as we go.

Neither is better than each other. They target different parts of the coding challenge. By Alan Mellor

Yes, I’ve found that three can be very helpful, especially these days.

  • Monitor 1: IDE full screen
  • Monitor 2: Google, JIRA ticket, documentation. Manual Test tools
  • Monitor 3: Zoom/Teams/Slack/Outlook for general comms

That third monitor becomes almost essential if you are remote pairing and want to see your collaborator in real time.

My current work is teaching groups in our academy. That also benefits from three monitors: presenter view, participant view, and Zoom for chat and hands up in the group.

I can get away with two monitors. I can even do it with a £3 HDMI fake monitor USB plug. Neither is quite as effective. Source: Alan Mellor

You make the properties not different. And the key way to do that is by removing the properties completely.

Instead, you tell your objects to do some behaviour.

Say we have three classes full of different data that all needs adding to some report. Make an interface like this:

  • interface IReportSource { 
  •     void includeIn( Report r ); 
  • } 

so here, all your classes with different data will implement this interface. We can call the method ‘includeIn’ on each of them. We pass in a concrete class Report to that method. This will be the report that is being generated.

Then your first class which used to look like

  • class ALoadOfData { 
  •     public string Name { get; set; } 
  •     public int Quantity { get; set; } 
  • } 

(forgive the rusty/pseudo C# syntax please)

can be translated into:

  • class ARealObject : IReportSource { 
  •     private string name ; 
  •     private int quantity ; 
  •  
  •     public void includeIn( Report r ) { 
  •         r.addBasicItem( name, quantity ); 
  •     } 
  • } 

You can see how the properties are no longer exposed. They remain encapsulated in the object, available for use inside our includeIn() method. That is now polymorphic, and you would write a custom includeIn() for each kind of class implementing IReportSource. It can then call a suitable method on the Report class, with a suitable number of properties (now hidden; so just fields). By Alan Mellor

What are the Top 20  lesser known but cool data structures?

1- Tries, also known as prefix-trees or crit-bit trees, have existed for over 40 years but are still relatively unknown. A very cool use of tries is described in “TRASH – A dynamic LC-trie and hash data structure“, which combines a trie with a hash function.
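A toy trie fits in a few lines of Python. This dict-of-dicts version is an illustrative sketch (not the LC-trie/hash structure from the paper) supporting insert, exact lookup, and prefix queries:

```python
class Trie:
    """Minimal prefix tree: each node is a dict mapping characters to child nodes."""

    def __init__(self):
        self.root = {}
        self.END = object()     # sentinel key marking end-of-word

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.setdefault(ch, {})
        node[self.END] = True

    def contains(self, word):
        node = self.root
        for ch in word:
            if ch not in node:
                return False
            node = node[ch]
        return self.END in node

    def has_prefix(self, prefix):
        node = self.root
        for ch in prefix:
            if ch not in node:
                return False
            node = node[ch]
        return True
```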

2- Bloom filter: Bit array of m bits, initially all set to 0.

To add an item you run it through k hash functions that will give you k indices in the array which you then set to 1.

To check if an item is in the set, compute the k indices and check if they are all set to 1.

Of course, this gives some probability of false-positives (according to wikipedia it’s about 0.61^(m/n) where n is the number of inserted items). False-negatives are not possible.

Removing an item is impossible, but you can implement a counting Bloom filter, represented by an array of ints with increment/decrement.
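The scheme described above translates almost directly into Python; this sketch derives its k indices from salted SHA-256 digests, an illustrative choice of hash family:

```python
import hashlib

class BloomFilter:
    """m-bit array plus k hash functions, as described above."""

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _indices(self, item):
        # Derive k indices from salted SHA-256 digests (illustrative choice).
        for salt in range(self.k):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def might_contain(self, item):
        # True may be a false positive; False is always correct.
        return all(self.bits[idx] for idx in self._indices(item))
```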

3- Rope: It’s a string that allows for cheap prepends, substrings, middle insertions and appends. I’ve really only had use for it once, but no other structure would have sufficed. Prepends on regular strings and arrays were just far too expensive for what we needed to do, and reversing everything was out of the question.

4- Skip lists are pretty neat.

Wikipedia
A skip list is a probabilistic data structure, based on multiple parallel, sorted linked lists, with efficiency comparable to a binary search tree (order log n average time for most operations).

They can be used as an alternative to balanced trees (using probabilistic balancing rather than strict enforcement of balancing). They are easy to implement and faster than, say, a red-black tree. I think they should be in every good programmer’s toolchest.

If you want to get an in-depth introduction to skip-lists here is a link to a video of MIT’s Introduction to Algorithms lecture on them.

Also, here is a Java applet demonstrating Skip Lists visually.

5- Spatial Indices, in particular R-trees and KD-trees, store spatial data efficiently. They are good for geographical map coordinate data and VLSI place and route algorithms, and sometimes for nearest-neighbor search.

Bit Arrays store individual bits compactly and allow fast bit operations.

6- Zippers – derivatives of data structures that modify the structure to have a natural notion of ‘cursor’ – the current location. These are really useful as they guarantee indices cannot be out of bounds – used, e.g., in the xmonad window manager to track which window has focus.

Amazingly, you can derive them by applying techniques from calculus to the type of the original data structure!

7- Suffix tries. Useful for almost all kinds of string searching (http://en.wikipedia.org/wiki/Suffix_trie#Functionality). See also suffix arrays; they’re not quite as fast as suffix trees, but a whole lot smaller.

8- Splay trees (as mentioned above). The reason they are cool is threefold:

    • They are small: you only need the left and right pointers like you do in any binary tree (no node-color or size information needs to be stored)
    • They are (comparatively) very easy to implement
    • They offer optimal amortized complexity for a whole host of “measurement criteria” (log n lookup time being the one everybody knows). See http://en.wikipedia.org/wiki/Splay_tree#Performance_theorems

9- Heap-ordered search trees: you store a bunch of (key, prio) pairs in a tree, such that it’s a search tree with respect to the keys, and heap-ordered with respect to the priorities. One can show that such a tree has a unique shape (and it’s not always fully packed up-and-to-the-left). With random priorities, it gives you expected O(log n) search time, IIRC.

10- A niche one is adjacency lists for undirected planar graphs with O(1) neighbour queries. This is not so much a data structure as a particular way to organize an existing data structure. Here’s how you do it: every planar graph has a node with degree at most 6. Pick such a node, put its neighbors in its neighbor list, remove it from the graph, and recurse until the graph is empty. When given a pair (u, v), look for u in v’s neighbor list and for v in u’s neighbor list. Both have size at most 6, so this is O(1).

By the above algorithm, if u and v are neighbors, you won’t have both u in v’s list and v in u’s list. If you need this, just add each node’s missing neighbors to that node’s neighbor list, but store how much of the neighbor list you need to look through for fast lookup.

11- Lock-free alternatives to standard data structures, i.e. lock-free queues, stacks and lists, are much overlooked.
They are increasingly relevant as concurrency becomes a higher priority, and are a much more admirable goal than using mutexes or locks to handle concurrent reads/writes.

Here’s some links
http://www.cl.cam.ac.uk/research/srg/netos/lock-free/
http://www.research.ibm.com/people/m/michael/podc-1996.pdf [Links to PDF]
http://www.boyet.com/Articles/LockfreeStack.html

Mike Acton’s (often provocative) blog has some excellent articles on lock-free design and approaches

12- I think Disjoint Sets are pretty nifty for cases when you need to divide a bunch of items into distinct sets and query membership. Good implementations of the Union and Find operations result in amortized costs that are effectively constant (the inverse of Ackermann’s function, if I recall my data structures class correctly).
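A sketch of Union-Find in Python, with path compression (here, path halving) and union by rank, the two tricks behind that near-constant amortized cost:

```python
class DisjointSet:
    """Union-Find with path halving and union by rank."""

    def __init__(self, n):
        self.parent = list(range(n))   # each element starts as its own root
        self.rank = [0] * n

    def find(self, x):
        # Path halving: point every other node at its grandparent as we climb.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra           # attach the shorter tree under the taller
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1
```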

13- Fibonacci heaps

They’re used in some of the fastest known algorithms (asymptotically) for a lot of graph-related problems, such as the shortest-path problem. Dijkstra’s algorithm runs in O(E log V) time with standard binary heaps; using Fibonacci heaps improves that to O(E + V log V), which is a huge speedup for dense graphs. Unfortunately, they have a high constant factor, which often makes them impractical.

14- Anyone with experience in 3D rendering should be familiar with BSP trees. Generally, it’s a method of structuring a 3D scene so it can be managed for rendering, given the camera coordinates and bearing.

Binary space partitioning (BSP) is a method for recursively subdividing a space into convex sets by hyperplanes. This subdivision gives rise to a representation of the scene by means of a tree data structure known as a BSP tree.

In other words, it is a method of breaking up intricately shaped polygons into convex sets, or smaller polygons consisting entirely of non-reflex angles (angles smaller than 180°). For a more general description of space partitioning, see space partitioning.

Originally, this approach was proposed in 3D computer graphics to increase the rendering efficiency. Some other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D computer games, and other computer applications that involve handling of complex spatial scenes.

15- Huffman trees – used for compression.
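A Huffman code can be built with a heap of partial trees; this Python sketch merges the two least-frequent subtrees until one remains, tracking each symbol’s codeword along the way (it assumes at least two distinct symbols):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Return {symbol: bitstring}; rarer symbols get longer codewords."""
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # the two least-frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in left.items()}
        merged.update({s: "1" + code for s, code in right.items()})
        heapq.heappush(heap, [f1 + f2, tiebreak, merged])
        tiebreak += 1   # unique tiebreaker so the dicts are never compared
    return heap[0][2]
```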

16- Have a look at Finger Trees, especially if you’re a fan of the previously mentioned purely functional data structures. They’re a functional representation of persistent sequences supporting access to the ends in amortized constant time, and concatenation and splitting in time logarithmic in the size of the smaller piece.

As per the original article:

Our functional 2-3 finger trees are an instance of a general design technique introduced by Okasaki (1998), called implicit recursive slowdown. We have already noted that these trees are an extension of his implicit deque structure, replacing pairs with 2-3 nodes to provide the flexibility required for efficient concatenation and splitting.

A Finger Tree can be parameterized with a monoid, and using different monoids will result in different behaviors for the tree. This lets Finger Trees simulate other data structures.

17- Circular or ring buffer– used for streaming, among other things.
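A minimal ring buffer in Python; this sketch overwrites the oldest element when full, which is the behavior you usually want for streaming:

```python
class RingBuffer:
    """Fixed-capacity FIFO; when full, the oldest element is overwritten."""

    def __init__(self, capacity):
        self.buf = [None] * capacity
        self.capacity = capacity
        self.start = 0    # index of the oldest element
        self.size = 0

    def push(self, item):
        end = (self.start + self.size) % self.capacity
        self.buf[end] = item
        if self.size == self.capacity:
            self.start = (self.start + 1) % self.capacity  # drop the oldest
        else:
            self.size += 1

    def to_list(self):
        """Contents from oldest to newest."""
        return [self.buf[(self.start + i) % self.capacity]
                for i in range(self.size)]
```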

18- I’m surprised no one has mentioned Merkle trees (i.e. hash trees).

Used in many cases (P2P programs, digital signatures) where you want to verify the hash of a whole file when you only have part of the file available to you.
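The core of a Merkle tree is just repeated pairwise hashing; a Python sketch (duplicating the last hash on odd-sized levels is one common convention, not the only one):

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(chunks):
    """Root hash over a non-empty list of byte chunks, hashed upward level by level."""
    level = [_h(c) for c in chunks]          # leaf hashes
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate the last hash on odd levels
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]
```

Any single altered chunk changes the root, which is what lets a peer verify one piece of a file against a trusted root hash.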

19- <zvrba> Van Emde-Boas trees

I think it’d be useful to know why they’re cool. In general, the question “why” is the most important to ask 😉

My answer is that they give you O(log log n) dictionaries with {1..n} keys, independent of how many of the keys are in use. Just like repeated halving gives you O(log n), repeated sqrting gives you O(log log n), which is what happens in the vEB tree.

20- An interesting variant of the hash table is called Cuckoo Hashing. It uses multiple hash functions instead of just 1 in order to deal with hash collisions. Collisions are resolved by removing the old object from the location specified by the primary hash, and moving it to a location specified by an alternate hash function. Cuckoo Hashing allows for more efficient use of memory space because you can increase your load factor up to 91% with only 3 hash functions and still have good access time.
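The eviction scheme is easiest to see in code. A toy Python version, using hash(key) and a salted tuple hash as stand-ins for two independent hash functions:

```python
class CuckooHash:
    """Two tables; a full slot evicts its occupant to the occupant's other table."""

    def __init__(self, size=11):
        self.size = size
        self.tables = [[None] * size, [None] * size]

    def _slot(self, i, key):
        # Illustrative stand-ins for two independent hash functions.
        h = hash(key) if i == 0 else hash((key, "alt"))
        return h % self.size

    def insert(self, key, max_kicks=50):
        if self.contains(key):
            return True
        i = 0
        for _ in range(max_kicks):
            idx = self._slot(i, key)
            if self.tables[i][idx] is None:
                self.tables[i][idx] = key
                return True
            # Slot taken: kick out the occupant and re-place it in the other table.
            key, self.tables[i][idx] = self.tables[i][idx], key
            i = 1 - i
        return False  # probable cycle; a real implementation would rehash

    def contains(self, key):
        # Lookups probe at most two slots, so they are worst-case O(1).
        return any(self.tables[i][self._slot(i, key)] == key for i in (0, 1))
```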

Honorable mentions: splay trees, cuckoo hashing, min-max heaps, cache-oblivious data structures, left-leaning red-black trees, work-stealing queues, bootstrapped skew-binomial heaps, kd-trees, MX-CIF quadtrees, HAMT, inverted indexes, Fenwick trees, ball trees, van Emde Boas trees, nested sets, half-edge data structures, scapegoat trees, unrolled linked lists, 2-3 finger trees, pairing heaps, interval trees, XOR linked lists, binary decision diagrams, region quadtrees, treaps, counted unsorted balanced B-trees, Arne Andersson trees, DAWGs, BK-trees (Burkhard-Keller trees), Zobrist hashing, persistent data structures, B* trees, deletable Bloom filters (DlBF), ring buffers, skip lists, priority deques, ternary search trees, FM-indexes, PQ-trees, sparse matrix data structures, delta lists/delta queues, bucket brigades, the Burrows–Wheeler transform, corner-stitched data structures, disjoint-set forests, binomial heaps, and cycle sort.

Variable names in languages like Python are not bound to storage locations until run time. That means you have to look up each name to find out what storage it is bound to and what its type is before you can apply an operation like “+” to it. In C++, names are bound to storage at compile time, so no lookup is needed, and the type is fixed at compile time so the compiler can generate machine code with no overhead for interpretation. Late-bound languages will never be as fast as languages bound at compile time.

You could make a language that looks kinda like Python that is compile-time bound and statically typed. You could incrementally compile such a language. But you can also build an environment that incrementally compiles C++ so it would feel a lot like using Python. Try godbolt or tutorialspoint if you want to see this actually working for small programs. 

Source: Quora
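Python’s late binding is easy to demonstrate: the same call site can change meaning while the program runs, which is exactly the lookup overhead described above. A small sketch:

```python
def total_area(shape):
    # Both the name `shape` and the attribute `area` are resolved at call
    # time, so what this line does can change while the program is running.
    return shape.area()

class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side * self.side

sq = Square(3)
before = total_area(sq)          # finds Square.area at run time: 9

Square.area = lambda self: -1    # rebind the attribute at run time
after = total_area(sq)           # same call site, new behavior: -1
```

A C++ compiler, by contrast, can fix the meaning of the equivalent call at compile time (or at least to a vtable slot), so no per-call name lookup is needed.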

Have I got good news for you! No one has ever asked me my IQ, nor have I ever asked anyone for their IQ. This was true when I was a software engineer, and is true now that I’m a computer scientist.

Try to learn to program. If you can learn in an appropriate environment (a class with a good instructor), go from there. If you fail the first time, adjust your learning approach and try again. If you still can’t, find another future; you probably wouldn’t like computer programming, anyway. If you learn later, that’s fine. 

Source: Here

Beginners to C++ will consistently struggle with getting a C++ program off the ground. Even “Hello World” can be a challenge. Making a GUI in C++ from scratch? Almost impossible in the beginning.

These 4 areas cannot be learned by any beginner to C++ in 1 day or even 1 month in most cases. These areas challenge nearly all beginners and I have seen cases where it can take a few months to teach.

These are the most fundamental things you need to be able to do to build and produce a program in C++.

Basic Challenge #1: Creating a Program File

  1. Compiling and linking, even in an IDE.
  2. Project settings in an IDE for C++ projects.
  3. Make files, scripts, environment variables affecting compilation.

Basic Challenge #2: Using Other People’s C++ Code

  1. Going outside the STL and using libraries.
  2. Proper library paths in source, file path during compile.
  3. Static versus dynamic libraries during linking.
  4. Symbol reference resolution.

Basic Challenge #3: Troubleshooting Code

  1. Deciphering compiler error messages.
  2. Deciphering linker error messages.
  3. Resolving segmentation faults.

Basic Challenge #4: Actual C++ Code

  1. Writing excellent if/loop/case/assign/call statements.
  2. Managing header/implementation files consistently.
  3. Rigorously avoiding name collisions while staying productive.
  4. Various forms of function callback, especially in GUIs.

How do you explain them?

You cannot explain any of them in a way that most people will pick up right away. You can describe these things by way of analogy, and you can even have learners mirror you as you demonstrate them. I’ve done similar things with trainees in a work setting. In the end, it usually requires time on the order of months and years to pick these things up.

More at C++ the Basic Way – UI and Command-Line

As a professional compiler writer and a student of computer languages and computer architecture, I think this question needs a deeper analysis.

I would propose the following taxonomy:

1. Assembly code,

2. Implementation languages,

3. Low Level languages and

4. High Level Languages.

Assembly code is where there is a one-for-one translation between source statements and machine instructions.

Macro processors were invented to improve productivity, but for debugging a one-for-one listing is still needed. The next question is “What is the hardest Assembly code?” I would vote for the x86–32. It is a very byzantine architecture with a number of mistakes and missteps. Fortunately the x86–64 cleans up many of these errors.

Implementation languages are languages that are architecture specific but allow a more statement like expression.

There is no “semantic gap” between these languages and the machine. Bliss, PL360, and the first versions of C were in this category. They required the same understanding of the machine as assembly, without the pain of assembly; the gap they close is one of syntax only, not semantics. These are hard languages.

Next are the Low Level Languages.

Modern C firmly fits here. These are languages whose design was molded around the limitations of computer architecture. FORTRAN, C, Pascal, and Basic are archetypes of these languages. They are easier to learn and use than Assembly and implementation languages. They all have a “Run Time Library” that maintains an execution environment.

As a note, LISP has some syntax, CAR and CDR, which are left over from the IBM 704 it was first implemented on.

Last are the “High Level Languages”.

These are languages that require an extensive runtime environment, and all of them except Algol require a “garbage collector” for efficient memory support. The languages are: Algol, SNOBOL4, LISP (and its variants), Java, Smalltalk, Python, Ruby, and Prolog.

Which of these is hardest? I would vote for Prolog, with LISP second. Why? The logical process of “Resolution” has taken me some time to learn; mastery is still a long way away. Is it harder than Assembly code? Yes and no. I would never attempt a problem I use Prolog for in Assembly; the order of effort is too big. I find I spend hours writing 20 lines of Prolog which replace hundreds of lines of SNOBOL4. LISP can be hard unless you have intelligent editors and other tools. In one sense LISP is an “assembly language for an AI machine” and Prolog is “assembly language for a logic machine.” Both are very powerful languages. I find it takes deep mental effort to write code in both, but that code does wonderful things!

What and where are the stack and the heap?

  • Where and what are they (physically in a real computer’s memory)?
  • To what extent are they controlled by the OS or language run-time?
  • What is their scope?
  • What determines the size of each of them?
  • What makes one faster?

The stack is the memory set aside as scratch space for a thread of execution. When a function is called, a block is reserved on the top of the stack for local variables and some bookkeeping data. When that function returns, the block becomes unused and can be used the next time a function is called. The stack is always reserved in a LIFO (last in first out) order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.

The heap is memory set aside for dynamic allocation. Unlike the stack, there’s no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time; there are many custom heap allocators available to tune heap performance for different usage patterns.

Each thread gets a stack, while there’s typically only one heap for the application (although it isn’t uncommon to have multiple heaps for different types of allocation).

To answer your questions directly:

To what extent are they controlled by the OS or language runtime?

The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.

What is their scope?

The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.

What determines the size of each of them?

The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).

What makes one faster?

The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or deallocation. Also, each byte in the stack tends to be reused very frequently which means it tends to be mapped to the processor’s cache, making it very fast. Another performance hit for the heap is that the heap, being mostly a global resource, typically has to be multi-threading safe, i.e. each allocation and deallocation needs to be – typically – synchronized with “all” other heap accesses in the program.

A clear demonstration: 
Image source: vikashazrati.wordpress.com

Stack:

  • Stored in computer RAM just like the heap.
  • Variables created on the stack will go out of scope and are automatically deallocated.
  • Much faster to allocate in comparison to variables on the heap.
  • Implemented with an actual stack data structure.
  • Stores local data, return addresses, used for parameter passing.
  • Can have a stack overflow when too much of the stack is used (mostly from infinite or too deep recursion, very large allocations).
  • Data created on the stack can be used without pointers.
  • You would use the stack if you know exactly how much data you need to allocate before compile time and it is not too big.
  • Usually has a maximum size already determined when your program starts.

Heap:

  • Stored in computer RAM just like the stack.
  • In C++, variables on the heap must be destroyed manually and never fall out of scope. The data is freed with delete, delete[], or free.
  • Slower to allocate in comparison to variables on the stack.
  • Used on demand to allocate a block of data for use by the program.
  • Can have fragmentation when there are a lot of allocations and deallocations.
  • In C++ or C, data created on the heap will be pointed to by pointers and allocated with new or malloc respectively.
  • Can have allocation failures if too big of a buffer is requested to be allocated.
  • You would use the heap if you don’t know exactly how much data you will need at run time or if you need to allocate a lot of data.
  • Responsible for memory leaks.

Example:

void foo()
{
  char *pBuffer; //<--nothing allocated yet (excluding the pointer itself, which is allocated here on the stack).
  bool b = true; // Allocated on the stack.
  if(b)
  {
    //Create 500 bytes on the stack
    char buffer[500];

    //Create 500 bytes on the heap
    pBuffer = new char[500];

  }//<-- buffer is deallocated here, pBuffer is not
}//<--- oops there's a memory leak, I should have called delete[] pBuffer;

The most important point is that heap and stack are generic terms for ways in which memory can be allocated. They can be implemented in many different ways, and the terms apply to the basic concepts.

  • In a stack of items, items sit one on top of the other in the order they were placed there, and you can only remove the top one (without toppling the whole thing over).

[Image: a stack, like a stack of papers]

    The simplicity of a stack is that you do not need to maintain a table containing a record of each section of allocated memory; the only state information you need is a single pointer to the end of the stack. To allocate and de-allocate, you just increment and decrement that single pointer. Note: a stack can sometimes be implemented to start at the top of a section of memory and extend downwards rather than growing upwards.

  • In a heap, there is no particular order to the way items are placed. You can reach in and remove items in any order because there is no clear ‘top’ item.

[Image: a heap, like a heap of licorice allsorts]

    Heap allocation requires maintaining a full record of what memory is allocated and what isn’t, as well as some overhead maintenance to reduce fragmentation, find contiguous memory segments big enough to fit the requested size, and so on. Memory can be deallocated at any time leaving free space. Sometimes a memory allocator will perform maintenance tasks such as defragmenting memory by moving allocated memory around, or garbage collecting – identifying at runtime when memory is no longer in scope and deallocating it.

These images should do a fairly good job of describing the two ways of allocating and freeing memory in a stack and a heap. Yum!

  • To what extent are they controlled by the OS or language runtime?

    As mentioned, heap and stack are general terms, and can be implemented in many ways. Computer programs typically have a stack called a call stack which stores information relevant to the current function such as a pointer to whichever function it was called from, and any local variables. Because functions call other functions and then return, the stack grows and shrinks to hold information from the functions further down the call stack. A program doesn’t really have runtime control over it; it’s determined by the programming language, OS and even the system architecture.

    A heap is a general term used for any memory that is allocated dynamically and randomly; i.e. out of order. The memory is typically allocated by the OS, with the application calling API functions to do this allocation. There is a fair bit of overhead required in managing dynamically allocated memory, which is usually handled by the runtime code of the programming language or environment used.

  • What is their scope?

    The call stack is such a low level concept that it doesn’t relate to ‘scope’ in the sense of programming. If you disassemble some code you’ll see relative pointer style references to portions of the stack, but as far as a higher level language is concerned, the language imposes its own rules of scope. One important aspect of a stack, however, is that once a function returns, anything local to that function is immediately freed from the stack. That works the way you’d expect it to work given how your programming languages work. In a heap, it’s also difficult to define. The scope is whatever is exposed by the OS, but your programming language probably adds its rules about what a “scope” is in your application. The processor architecture and the OS use virtual addressing, which the processor translates to physical addresses and there are page faults, etc. They keep track of what pages belong to which applications. You never really need to worry about this, though, because you just use whatever method your programming language uses to allocate and free memory, and check for errors (if the allocation/freeing fails for any reason).

  • What determines the size of each of them?

    Again, it depends on the language, compiler, operating system and architecture. A stack is usually pre-allocated, because by definition it must be contiguous memory. The language compiler or the OS determine its size. You don’t store huge chunks of data on the stack, so it’ll be big enough that it should never be fully used, except in cases of unwanted endless recursion (hence, “stack overflow”) or other unusual programming decisions.

    A heap is a general term for anything that can be dynamically allocated. Depending on which way you look at it, it is constantly changing size. In modern processors and operating systems the exact way it works is very abstracted anyway, so you don’t normally need to worry much about how it works deep down, except that (in languages where it lets you) you mustn’t use memory that you haven’t allocated yet or memory that you have freed.

  • What makes one faster?

    The stack is faster because all free memory is always contiguous. No list needs to be maintained of all the segments of free memory, just a single pointer to the current top of the stack. Compilers usually store this pointer in a special, fast register for this purpose. What’s more, subsequent operations on a stack are usually concentrated within very nearby areas of memory, which at a very low level is good for optimization by the processor on-die caches.

  • Both the stack and the heap are memory areas allocated from the underlying operating system (often virtual memory that is mapped to physical memory on demand).
  • In a multi-threaded environment each thread will have its own completely independent stack but they will share the heap. Concurrent access has to be controlled on the heap and is not possible on the stack.

The heap

  • The heap contains a linked list of used and free blocks. New allocations on the heap (by new or malloc) are satisfied by creating a suitable block from one of the free blocks. This requires updating the list of blocks on the heap. This meta information about the blocks on the heap is also stored on the heap often in a small area just in front of every block.
  • As the heap grows new blocks are often allocated from lower addresses towards higher addresses. Thus you can think of the heap as a heap of memory blocks that grows in size as memory is allocated. If the heap is too small for an allocation the size can often be increased by acquiring more memory from the underlying operating system.
  • Allocating and deallocating many small blocks may leave the heap in a state where there are a lot of small free blocks interspersed between the used blocks. A request to allocate a large block may fail because none of the free blocks are large enough to satisfy the allocation request even though the combined size of the free blocks may be large enough. This is called heap fragmentation.
  • When a used block that is adjacent to a free block is deallocated the new free block may be merged with the adjacent free block to create a larger free block effectively reducing the fragmentation of the heap.


The stack

  • The stack often works in close tandem with a special register on the CPU named the stack pointer. Initially the stack pointer points to the top of the stack (the highest address on the stack).
  • The CPU has special instructions for pushing values onto the stack and popping them off the stack. Each push stores the value at the current location of the stack pointer and decreases the stack pointer. A pop retrieves the value pointed to by the stack pointer and then increases the stack pointer (don’t be confused by the fact that adding a value to the stack decreases the stack pointer and removing a value increases it. Remember that the stack grows to the bottom). The values stored and retrieved are the values of the CPU registers.
  • If a function has parameters, these are pushed onto the stack before the call to the function. The code in the function is then able to navigate up the stack from the current stack pointer to locate these values.
  • When a function is called the CPU uses special instructions that push the current instruction pointer onto the stack, i.e. the address of the code executing on the stack. The CPU then jumps to the function by setting the instruction pointer to the address of the function called. Later, when the function returns, the old instruction pointer is popped off the stack and execution resumes at the code just after the call to the function.
  • When a function is entered, the stack pointer is decreased to allocate more space on the stack for local (automatic) variables. If the function has one local 32 bit variable four bytes are set aside on the stack. When the function returns, the stack pointer is moved back to free the allocated area.
  • Nesting function calls work like a charm. Each new call will allocate function parameters, the return address and space for local variables and these activation records can be stacked for nested calls and will unwind in the correct way when the functions return.
  • As the stack is a limited block of memory, you can cause a stack overflow by calling too many nested functions and/or allocating too much space for local variables. Often the memory area used for the stack is set up in such a way that writing below the bottom (the lowest address) of the stack will trigger a trap or exception in the CPU. This exceptional condition can then be caught by the runtime and converted into some kind of stack overflow exception.


Can a function be allocated on the heap instead of a stack?

No, activation records for functions (i.e. local or automatic variables) are allocated on the stack that is used not only to store these variables, but also to keep track of nested function calls.

How the heap is managed is really up to the runtime environment. C uses malloc and C++ uses new, but many other languages have garbage collection.

However, the stack is a more low-level feature closely tied to the processor architecture. Growing the heap when there is not enough space isn’t too hard since it can be implemented in the library call that handles the heap. However, growing the stack is often impossible as the stack overflow only is discovered when it is too late; and shutting down the thread of execution is the only viable option.

In the following C# code

public void Method1()
{
    int i = 4;
    int y = 2;
    class1 cls1 = new class1();
}

Here’s how the memory is managed

Picture of variables on the stack

Local Variables that only need to last as long as the function invocation go in the stack. The heap is used for variables whose lifetime we don’t really know up front but we expect them to last a while. In most languages it’s critical that we know at compile time how large a variable is if we want to store it on the stack.

Objects (which vary in size as we update them) go on the heap because we don’t know at creation time how long they are going to last. In many languages the heap is garbage collected to find objects (such as the cls1 object) that no longer have any references.

In Java, most objects go directly into the heap. In languages like C / C++, structs and classes can often remain on the stack when you’re not dealing with pointers.

More information can be found here:

The difference between stack and heap memory allocation « timmurphy.org

and here:

Creating Objects on the Stack and Heap

This article is the source of picture above: Six important .NET concepts: Stack, heap, value types, reference types, boxing, and unboxing – CodeProject

but be aware it may contain some inaccuracies.

The Stack

When you call a function, the arguments to that function plus some other overhead are put on the stack. Some info (such as where to go on return) is also stored there. When you declare a variable inside your function, that variable is also allocated on the stack.

Deallocating the stack is pretty simple because you always deallocate in the reverse order in which you allocate. Stack stuff is added as you enter functions, the corresponding data is removed as you exit them. This means that you tend to stay within a small region of the stack unless you call lots of functions that call lots of other functions (or create a recursive solution).

The Heap

The heap is a generic name for where you put the data that you create on the fly. If you don’t know how many spaceships your program is going to create, you are likely to use the new (or malloc or equivalent) operator to create each spaceship. This allocation is going to stick around for a while, so it is likely we will free things in a different order than we created them.

Thus, the heap is far more complex, because there end up being regions of memory that are unused interleaved with chunks that are in use – memory gets fragmented. Finding free memory of the size you need is a difficult problem. This is why the heap should be avoided where possible (though it is still often used).

Implementation

Implementation of both the stack and heap is usually down to the runtime / OS. Often games and other applications that are performance critical create their own memory solutions that grab a large chunk of memory from the heap and then dish it out internally to avoid relying on the OS for memory.

This is only practical if your memory usage is quite different from the norm – i.e for games where you load a level in one huge operation and can chuck the whole lot away in another huge operation.

Physical location in memory

This is less relevant than you think because of a technology called Virtual Memory, which makes your program think that you have access to a certain address where the physical data is somewhere else (even on the hard disc!). The addresses you get for the stack are in increasing order as your call tree gets deeper. The addresses for the heap are unpredictable (i.e. implementation-specific) and frankly not important.

In Short

A stack is used for static memory allocation and a heap for dynamic memory allocation, both stored in the computer’s RAM.


In Detail

The Stack

The stack is a “LIFO” (last in, first out) data structure, that is managed and optimized by the CPU quite closely. Every time a function declares a new variable, it is “pushed” onto the stack. Then every time a function exits, all of the variables pushed onto the stack by that function, are freed (that is to say, they are deleted). Once a stack variable is freed, that region of memory becomes available for other stack variables.

The advantage of using the stack to store variables, is that memory is managed for you. You don’t have to allocate memory by hand, or free it once you don’t need it any more. What’s more, because the CPU organizes stack memory so efficiently, reading from and writing to stack variables is very fast.

More can be found here.


The Heap

The heap is a region of your computer’s memory that is not managed automatically for you, and is not as tightly managed by the CPU. It is a more free-floating region of memory (and is larger). To allocate memory on the heap, you must use malloc() or calloc(), which are built-in C functions. Once you have allocated memory on the heap, you are responsible for using free() to deallocate that memory once you don’t need it any more.

If you fail to do this, your program will have what is known as a memory leak. That is, memory on the heap will still be set aside (and won’t be available to other processes). As we will see in the debugging section, there is a tool called Valgrind that can help you detect memory leaks.

Unlike the stack, the heap does not have size restrictions on variable size (apart from the obvious physical limitations of your computer). Heap memory is slightly slower to be read from and written to, because one has to use pointers to access memory on the heap. We will talk about pointers shortly.

Unlike the stack, variables created on the heap are accessible by any function, anywhere in your program. Heap variables are essentially global in scope.

More can be found here.


Variables allocated on the stack are stored directly to the memory and access to this memory is very fast, and its allocation is dealt with when the program is compiled. When a function or a method calls another function which in turns calls another function, etc., the execution of all those functions remains suspended until the very last function returns its value. The stack is always reserved in a LIFO order, the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack, freeing a block from the stack is nothing more than adjusting one pointer.

Variables allocated on the heap have their memory allocated at run time and accessing this memory is a bit slower, but the heap size is only limited by the size of virtual memory. Elements of the heap have no dependencies with each other and can always be accessed randomly at any time. You can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time.


You can use the stack if you know exactly how much data you need to allocate before compile time, and it is not too big. You can use the heap if you don’t know exactly how much data you will need at runtime or if you need to allocate a lot of data.

In a multi-threaded situation each thread will have its own completely independent stack, but they will share the heap. The stack is thread specific and the heap is application specific. The stack is important to consider in exception handling and thread executions.



At run time, if the application needs more heap, it can allocate memory from free memory, and if the stack needs more memory, it can only draw on the free memory already set aside for the application.

Even more detail is given here and here.


Now come to your question’s answers.

To what extent are they controlled by the OS or language runtime?

The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.

More can be found here.

What is their scope?

Already given above.

“You can use the stack if you know exactly how much data you need to allocate before compile time, and it is not too big. You can use the heap if you don’t know exactly how much data you will need at runtime or if you need to allocate a lot of data.”

More can be found in here.

What determines the size of each of them?

The size of the stack is set by the OS when a thread is created. The size of the heap is set on application startup, but it can grow as space is needed (the allocator requests more memory from the operating system).

What makes one faster?

Stack allocation is much faster since all it really does is move the stack pointer. Using memory pools, you can get comparable performance out of heap allocation, but that comes with a slight added complexity and its own headaches.

Also, stack vs. heap is not only a performance consideration; it also tells you a lot about the expected lifetime of objects.

Details can be found from here.

How do you stop scripters from slamming your website hundreds of times a second?

How about implementing something like SO does with the CAPTCHAs?

If you’re using the site normally, you’ll probably never see one. If you happen to reload the same page too often, post successive comments too quickly, or something else that triggers an alarm, make them prove they’re human. In your case, this would probably be constant reloads of the same page, following every link on a page quickly, or filling in an order form too fast to be human.

If they fail the check x times in a row (say, 2 or 3), give that IP a timeout or other such measure. Then, at the end of the timeout, dump them back to the check again.


Since you have unregistered users accessing the site, you do have only IPs to go on. You can issue sessions to each browser and track that way if you wish. And, of course, throw up a human-check if too many sessions are being (re-)created in succession (in case a bot keeps deleting the cookie).

As far as catching too many innocents, you can put up a disclaimer on the human-check page: “This page may also appear if too many anonymous users are viewing our site from the same location. We encourage you to register or login to avoid this.” (Adjust the wording appropriately.)

Besides, what are the odds that X people are loading the same page(s) at the same time from one IP? If they’re high, maybe you need a different trigger mechanism for your bot alarm.


Edit: Another option is if they fail too many times, and you’re confident about the product’s demand, to block them and make them personally CALL you to remove the block.

Having people call does seem like an asinine measure, but it makes sure there’s a human somewhere behind the computer. The key is to have the block only be in place for a condition which should almost never happen unless it’s a bot (e.g. fail the check multiple times in a row). Then it FORCES human interaction – to pick up the phone.

In response to the comment of having them call me, there’s obviously that tradeoff here. Are you worried enough about ensuring your users are human to accept a couple phone calls when they go on sale? If I were so concerned about a product getting to human users, I’d have to make this decision, perhaps sacrificing a (small) bit of my time in the process.

Since it seems like you’re determined to not let bots get the upper hand/slam your site, I believe the phone may be a good option. Since I don’t make a profit off your product, I have no interest in receiving these calls. Were you to share some of that profit, however, I may become interested. As this is your product, you have to decide how much you care and implement accordingly.


The other ways of releasing the block just aren’t as effective: a timeout (but they’d get to slam your site again after, rinse-repeat), a long timeout (if it was really a human trying to buy your product, they’d be SOL and punished for failing the check), email (easily done by bots), fax (same), or snail mail (takes too long).

You could, of course, instead have the timeout period increase per IP for each time they get a timeout. Just make sure you’re not punishing true humans inadvertently.

The unsatisfying answer: Nearly every C++ compiler can output assembly language,* so assembly language can be exactly the same speed as C++ if you use C++ to develop the assembly code.

The more interesting answer: It’s highly unlikely that an application written entirely in assembly language remains faster than the same application written in C++ over the long run, even in the unlikely case it starts out faster.

Repeat after me: Assembly Language Isn’t Magic™.

For the nitty gritty details, I’ll just point you to some previous answers I’ve written, as well as some related questions, and at the end, an excellent answer from Christopher Clark:

Performance optimization strategies as a last resort

Let’s assume:

  • the code already is working correctly
  • the algorithms chosen are already optimal for the circumstances of the problem
  • the code has been measured, and the offending routines have been isolated
  • all attempts to optimize will also be measured to ensure they do not make matters worse

OK, you’re defining the problem to where it would seem there is not much room for improvement. That is fairly rare, in my experience. I tried to explain this in a Dr. Dobbs article in November 1993, by starting from a conventionally well-designed non-trivial program with no obvious waste and taking it through a series of optimizations until its wall-clock time was reduced from 48 seconds to 1.1 seconds, and the source code size was reduced by a factor of 4. My diagnostic tool was manual stack sampling: pausing the program at random and examining the call stack. The sequence of changes was as follows:

  • The first problem found was use of list clusters (now called “iterators” and “container classes”) accounting for over half the time. Those were replaced with fairly simple code, bringing the time down to 20 seconds.

  • Now the largest time-taker is more list-building. As a percentage, it was not so big before, but now it is because the bigger problem was removed. I find a way to speed it up, and the time drops to 17 seconds.

  • Now it is harder to find obvious culprits, but there are a few smaller ones that I can do something about, and the time drops to 13 sec.

Now I seem to have hit a wall. The samples are telling me exactly what it is doing, but I can’t seem to find anything that I can improve. Then I reflect on the basic design of the program, on its transaction-driven structure, and ask if all the list-searching that it is doing is actually mandated by the requirements of the problem.

Then I hit upon a re-design, where the program code is actually generated (via preprocessor macros) from a smaller set of source, and in which the program is not constantly figuring out things that the programmer knows are fairly predictable. In other words, don’t “interpret” the sequence of things to do, “compile” it.

  • That redesign is done, shrinking the source code by a factor of 4, and the time is reduced to 10 seconds.

Now, because it’s getting so quick, it’s hard to sample, so I give it 10 times as much work to do, but the following times are based on the original workload.

  • More diagnosis reveals that it is spending time in queue-management. In-lining these reduces the time to 7 seconds.

  • Now a big time-taker is the diagnostic printing I had been doing. Flush that – 4 seconds.

  • Now the biggest time-takers are calls to malloc and free. Recycle objects – 2.6 seconds.

  • Continuing to sample, I still find operations that are not strictly necessary – 1.1 seconds.

Total speedup factor: 43.6

Now no two programs are alike, but in non-toy software I’ve always seen a progression like this. First you get the easy stuff, and then the more difficult, until you get to a point of diminishing returns. Then the insight you gain may well lead to a redesign, starting a new round of speedups, until you again hit diminishing returns. Now this is the point at which it might make sense to wonder whether ++i or i++ or for(;;) or while(1) are faster: the kinds of questions I see so often on Stack Overflow.

P.S. It may be wondered why I didn’t use a profiler. The answer is that almost every one of these “problems” was a function call site, which stack samples pinpoint. Profilers, even today, are just barely coming around to the idea that statements and call instructions are more important to locate, and easier to fix, than whole functions.

I actually built a profiler to do this, but for a real down-and-dirty intimacy with what the code is doing, there’s no substitute for getting your fingers right in it. It is not an issue that the number of samples is small, because none of the problems being found are so tiny that they are easily missed.

ADDED: jerryjvl requested some examples. Here is the first problem. It consists of a small number of separate lines of code, together taking over half the time:

 /* IF ALL TASKS DONE, SEND ITC_ACKOP, AND DELETE OP */
if (ptop->current_task >= ILST_LENGTH(ptop->tasklist)){
. . .
/* FOR EACH OPERATION REQUEST */
for ( ptop = ILST_FIRST(oplist); ptop != NULL; ptop = ILST_NEXT(oplist, ptop)){
. . .
/* GET CURRENT TASK */
ptask = ILST_NTH(ptop->tasklist, ptop->current_task);

These were using the list cluster ILST (similar to a list class). They are implemented in the usual way, with “information hiding” meaning that the users of the class were not supposed to have to care how they were implemented. When these lines were written (out of roughly 800 lines of code) thought was not given to the idea that these could be a “bottleneck” (I hate that word). They are simply the recommended way to do things. It is easy to say in hindsight that these should have been avoided, but in my experience all performance problems are like that. In general, it is good to try to avoid creating performance problems. It is even better to find and fix the ones that are created, even though they “should have been avoided” (in hindsight). I hope that gives a bit of the flavor.

Here is the second problem, in two separate lines:

 /* ADD TASK TO TASK LIST */
ILST_APPEND(ptop->tasklist, ptask);
. . .
/* ADD TRANSACTION TO TRANSACTION QUEUE */
ILST_APPEND(trnque, ptrn);

These are building lists by appending items to their ends. (The fix was to collect the items in arrays, and build the lists all at once.) The interesting thing is that these statements only cost (i.e. were on the call stack) 3/48 of the original time, so they were not in fact a big problem at the beginning. However, after removing the first problem, they cost 3/20 of the time and so were now a “bigger fish”. In general, that’s how it goes.

I might add that this project was distilled from a real project I helped on. In that project, the performance problems were far more dramatic (as were the speedups), such as calling a database-access routine within an inner loop to see if a task was finished.

REFERENCE ADDED: The source code, both original and redesigned, can be found in www.ddj.com, for 1993, in file 9311.zip, files slug.asc and slug.zip.

EDIT 2011/11/26: There is now a SourceForge project containing source code in Visual C++ and a blow-by-blow description of how it was tuned. It only goes through the first half of the scenario described above, and it doesn’t follow exactly the same sequence, but still gets a 2-3 order of magnitude speedup.

Suggestions:

  • Pre-compute rather than re-calculate: any loops or repeated calls that contain calculations that have a relatively limited range of inputs, consider making a lookup (array or dictionary) that contains the result of that calculation for all values in the valid range of inputs. Then use a simple lookup inside the algorithm instead.
    Down-sides: if few of the pre-computed values are actually used this may make matters worse, also the lookup may take significant memory.
  • Don’t use library methods: most libraries need to be written to operate correctly under a broad range of scenarios, and perform null checks on parameters, etc. By re-implementing a method you may be able to strip out a lot of logic that does not apply in the exact circumstance you are using it.
    Down-sides: writing additional code means more surface area for bugs.
  • Do use library methods: to contradict myself, language libraries get written by people that are a lot smarter than you or me; odds are they did it better and faster. Do not implement it yourself unless you can actually make it faster (i.e.: always measure!)
  • Cheat: in some cases although an exact calculation may exist for your problem, you may not need ‘exact’, sometimes an approximation may be ‘good enough’ and a lot faster in the deal. Ask yourself, does it really matter if the answer is out by 1%? 5%? even 10%?
    Down-sides: Well… the answer won’t be exact.

When you can’t improve the performance any more – see if you can improve the perceived performance instead.

You may not be able to make your fooCalc algorithm faster, but often there are ways to make your application seem more responsive to the user.

A few examples:

  • anticipating what the user is going to request and start working on that before then
  • displaying results as they come in, instead of all at once at the end
  • Accurate progress meter

These won’t make your program faster, but it might make your users happier with the speed you have.

I spend most of my life in just this place. The broad strokes are to run your profiler and get it to record:

  • Cache misses. Data cache is the #1 source of stalls in most programs. Improve cache hit rate by reorganizing offending data structures to have better locality; pack structures and numerical types down to eliminate wasted bytes (and therefore wasted cache fetches); prefetch data wherever possible to reduce stalls.
  • Load-hit-stores. Compiler assumptions about pointer aliasing, and cases where data is moved between disconnected register sets via memory, can cause a certain pathological behavior that causes the entire CPU pipeline to clear on a load op. Find places where floats, vectors, and ints are being cast to one another and eliminate them. Use __restrict liberally to promise the compiler about aliasing.
  • Microcoded operations. Most processors have some operations that cannot be pipelined, but instead run a tiny subroutine stored in ROM. Examples on the PowerPC are integer multiply, divide, and shift-by-variable-amount. The problem is that the entire pipeline stops dead while this operation is executing. Try to eliminate use of these operations or at least break them down into their constituent pipelined ops so you can get the benefit of superscalar dispatch on whatever the rest of your program is doing.
  • Branch mispredicts. These too empty the pipeline. Find cases where the CPU is spending a lot of time refilling the pipe after a branch, and use branch hinting if available to get it to predict correctly more often. Or better yet, replace branches with conditional-moves wherever possible, especially after floating point operations because their pipe is usually deeper and reading the condition flags after fcmp can cause a stall.
  • Sequential floating-point ops. Make these SIMD.

And one more thing I like to do:

  • Set your compiler to output assembly listings and look at what it emits for the hotspot functions in your code. All those clever optimizations that “a good compiler should be able to do for you automatically”? Chances are your actual compiler doesn’t do them. I’ve seen GCC emit truly WTF code.

More suggestions:

  • Avoid I/O: Any I/O (disk, network, ports, etc.) is always going to be far slower than any code that is performing calculations, so get rid of any I/O that you do not strictly need.

  • Move I/O up-front: Load up all the data you are going to need for a calculation up-front, so that you do not have repeated I/O waits within the core of a critical algorithm (and maybe as a result repeated disk seeks, when loading all the data in one hit may avoid seeking).

  • Delay I/O: Do not write out your results until the calculation is over, store them in a data structure and then dump that out in one go at the end when the hard work is done.

  • Threaded I/O: For those daring enough, combine ‘I/O up-front’ or ‘Delay I/O’ with the actual calculation by moving the loading into a parallel thread, so that while you are loading more data you can work on a calculation on the data you already have, or while you calculate the next batch of data you can simultaneously write out the results from the last batch.

I love all of these:

  1. Graph algorithms, the Bellman–Ford algorithm in particular.
  2. Scheduling algorithms, the round-robin scheduling algorithm in particular.
  3. Dynamic programming algorithms, the 0/1 knapsack algorithm in particular.
  4. Backtracking algorithms, the 8-queens algorithm in particular.
  5. Greedy algorithms, the fractional knapsack algorithm in particular.

We use all these algorithms in our daily life in various forms at various places.

For example, every shopkeeper applies one or more scheduling algorithms to serve his customers, depending on his service policy and the situation. No single scheduling algorithm fits all situations.

All of us mentally apply one of the graph algorithms when we plan the shortest route to be taken when we go out for doing multiple things in one trip.

All of us apply one of the Greedy algorithms while selecting career, job, girlfriends, friends etc.

All of us apply a dynamic programming idea when we do simple multiplication mentally, by looking up the multiplication tables stored in our memory instead of recomputing each product.


Python’s built-in sort uses TimSort, a sorting algorithm invented by Tim Peters, which is now also used in other languages such as Java.

TimSort is a complex algorithm which combines the best of several other algorithms, and has the advantage of being stable – in other words, if two elements A and B are in the order A-then-B before the sort, and those elements compare equal during the sort, the algorithm guarantees that the result will maintain that A-then-B ordering.

That means, for example, that if you want to order a set of student scores by score with ties broken by name (so equal scores end up in alphabetical order), you can sort by name first and then sort by score.

TimSort has good performance against data sets which are partially sorted or already sorted (areas where some other algorithms struggle).

 
 
Timsort – Wikipedia
Timsort was designed to take advantage of runs of consecutive ordered elements that already exist in most real-world data, natural runs . It iterates over the data collecting elements into runs and simultaneously putting those runs in a stack. Whenever the runs on the top of the stack match a merge criterion , they are merged. This goes on until all data is traversed; then, all runs are merged two at a time and only one sorted run remains. 




I’m currently coding a SAT solver algorithm that will have to take millions of input data, and I was wondering if I should switch from Python to C.

Answer: Using best-of-class equivalent algorithms optimized compiled C code is often multiple orders of magnitude faster than Python code interpreted by CPython (the main Python implementation). Other Python implementations (like PyPy) might be a bit better, but not vastly so. Some computations fit Python better, but I have a feeling that a SAT solver implementation will not be competitive if written using Python.

All that said, do you need to write a new implementation? Could you use one of the excellent ones out there? CDCL implementations often do a good job, and there are various open-source ones readily available (e.g., this one: https://github.com/togatoga/togasat).

Comments:

1- I mean, also it depends. I recall seeing an analysis some time ago, that showed CPython can be as fast as C … provided you are almost exclusively using library functions written in C. That being said, for any non-trivial python program it will probably be the case that you must spend quite a bit of time in the interpreter, and not in C library functions.


The other answers are mistaken. This is a very common confusion: they describe a statically typed language, not a strongly typed one. There is a big difference.

Strongly typed vs weakly typed:

In strongly typed languages you get an error if the types do not match in an expression. It does not matter if the type is determined at compile time (static types) or runtime (dynamic types).

Both Java and Python are strongly typed. In both languages, you get an error if you try to add objects with mismatched types. For example, in Python, you get an error if you try to add a number and a string:

  • >>> a = 10 
  • >>> b = "hello" 
  • >>> a + b 
  • Traceback (most recent call last): 
  •   File "<stdin>", line 1, in <module> 
  • TypeError: unsupported operand type(s) for +: 'int' and 'str' 

In Python, you get this error at runtime. In Java, you would get a similar error at compile time. Most statically typed languages are also strongly typed.

The opposite of strongly typed language is weakly typed. In a weakly typed language, there are implicit type conversions. Instead of giving you an error, it will convert one of the values automatically and produce a result, even if such conversion loses data. This often leads to unexpected and unpredictable behavior.

Javascript is an example of a weakly typed language.

  • > let a = 10 
  • > let b = "hello" 
  • > a + b 
  • '10hello' 

Instead of an error, JavaScript will convert a to string and then concatenate the strings.

Static types vs dynamic types:

In a statically typed language, variables are bound to types and may only hold data of that type. Typically you declare variables and specify the type of data each variable holds. In some languages, the type can be deduced from what you assign to it, but the variable is still bound to that type. For example, in Java:

  • int a = 3; 
  • a = "hello"; // Error, a can only contain integers 

In a dynamically typed language, variables may hold any type of data. The type of the data is simply determined by what gets assigned to the variable at runtime. Python is dynamically typed, for example:

  • a = 10 
  • a = "hello" 
  • # no problem, a first held an integer and then a string 

Comments:

#1: Don’t confuse strongly typed with statically typed.

Python is dynamically typed and strongly typed.
Javascript is dynamically typed and weakly typed.
Java is statically typed and strongly typed.
C is statically typed and weakly typed.

See these articles for a longer explanation:
Magic lies here – Statically vs Dynamically Typed Languages
Key differences between mainly used languages for data science

I also added a drawing that illustrates how strong and static typing relate to each other:

Python is dynamically typed because types are determined at runtime. The opposite of dynamically typed is statically typed (not strongly typed)

Python is strongly typed because it will give errors when types don’t match instead of performing implicit conversion. The opposite of strongly typed is weakly typed

Python is strongly typed and dynamically typed

What is the difference between finalize() and destructor in Java?

finalize() is not guaranteed to be called, and the programmer has no control over when or in what order finalizers are called.

They are useless and should be ignored.

A destructor is not part of Java. It is a C++ language feature with very precise definitions of when it will be called.

Comments:

1- Until we got to languages like Rust (with the Drop trait) and a few others, was C++ the only language that had the destructor as a concept? I feel like other languages were inspired by that.

2- Many other languages manage memory for you, some even predating C: COBOL, FORTRAN and so on. That’s another reason there isn’t much attention paid to destructors.

What are some ways to avoid writing static helper classes in Java?

Mainly getting out of that procedural ‘function operates on parameters passed in’ mindset.

Tactically, the static can normally be moved onto one of the parameter objects. Or all the parameters become an object that the static moves to. A new object might be needed. Once done the static is now a fully fledged method on an object and is not static anymore.

I view this as a positive iterative step in discovering objects for a system.

For cases where a static makes sense (none come to mind), a good practice is to move it closer to where it is used, either in the same package or on a class that is strongly related.

I avoid having global ‘Utils’ classes full of statics that are unrelated. That’s fairly basic design, keeping unrelated things separate. In this case, the SOLID ISP principle applies: segregate into smaller, more focused interfaces.

Is there any programming language as easy as python and as fast and efficient as C++, if yes why it’s not used very often instead of C or C++ in low level programming like embedded systems, AAA 2D and 3D video games, or robotic?

Not really. I use Python occasionally for “quick hacks” – programs that I’ll probably run once and then delete – and also because I use Blender for 3D modeling and Python is its scripting language.

I used to write quite a bit of JavaScript for web programming but since WASM came along and allows me to run C++ at very nearly full speed inside a web browser, I write almost zero JavaScript these days.

I use C++ for almost everything.

Once you get to know C++ it’s no harder than Python – the main thing I find great about Python is the number of easy-to-find libraries.

But in AAA games – the poor performance of Python pretty much rules it out.

In embedded systems, the computer is generally too small to fit a Python interpreter into memory – so C or C++ is a more likely choice.

This was actually one of the interview questions I got when I applied at Google.

“Write a function that returns the average of two numbers.”

So I did, the way you would expect: (x+y)/2. I did it as a C++ template so it works for any kind of number.

interviewer: “What’s wrong with it?”

Well, I supposed there could be an overflow if adding the two numbers requires more space than the numeric type can hold. So I rewrote it as (x/2) + (y/2).

interviewer: “What’s wrong with it now?”

Well, I think we are losing a little precision by pre-dividing. So I wrote it another way.

interviewer: “What’s wrong with it now?”

And that went on for about 10 minutes. It ended with us talking about the heat death of the universe.

I got the job and ended up working with the guy. He said he had never done that before. He had just wanted to see what would happen.

Comments:

1-

The big problem you get with x/2 + y/2 is that it can/will give incorrect answers for integer inputs. For example, let’s average 3 and 3. The result should obviously be 3.

But with integer division, 3/2 = 1, and 1+1 = 2.

You need to add one to the result if and only if both inputs are odd.

2- Here’s what I’d do in C++ for integers, which I believe does the right thing including getting the rounding direction correct, and it can likely be made into a template that will do the right thing as well. This is not complete code, but I believe it gets the details correct…

Programming – Find the average of 2 numbers

That will work for any signed or unsigned integer type for op1 and op2 as long as they have the same type.

If you want it to do something intelligently where one of the operands is an unsigned type and the other one is a signed type, you could do it, but you need to define exactly what should happen, and realize that it’s quite likely that for maximum arithmetic correctness, the output type may need to be different than either input type. For instance, the average of a uint32_t and an int32_t can be too large to fit in an int32_t, and it can also be too small to fit in a uint32_t, so you probably need to go with a larger signed integer type, maybe int64_t.

3- I would have answered the question with a question, “Tell me more about the input, error handling capability of your system, and is this typical of the level of challenge here at google?” Then I’d provide eye contact, sit back, and see what happens. Years ago I had an interview question that asked what classical problem was part of a pen plotter control system. I told the interviewer that it was TSP but that if you had to change pens, you had to consider how much time it took to switch. They offered me a job but I declined given the poor financial condition of the company (SGI) which I discovered by asking the interviewer questions of my own. IMO: questions are at the heart of engineering. The interviewer, if they are smart, wants to see if you are capable of discovering the true nature of their problems. The best programmers I’ve ever worked with were able to get to the heart of problems and trade off solutions. Coding is a small part of the required skills.

Yes, they can.

There are features in HTTP to allow many different web sites to be served on a single IP address.

You can, if you are careful, assign the same IP address to many machines (it typically can’t be their only IP address, however, as distinguishable addresses make them much easier to manage).

You can run arbitrary server tasks on your many machines with the same IP address if you have some way of sending client connections to the correct machine. Obviously that can’t be the IP address, because they’re all the same. But there are ways.

However… this needs to be carefully planned. There are many issues. – Andrew Mc Gregor

It depends on how you want to store and access data.

For the most part, as a general concept, old school cryptography is obsolete.

It was based on ciphers, which were based on it being mathematically “hard” to crack.

If you can throw a compute cluster at DES, even with a one byte “salt”, it’s pretty easy to crack a password database in seconds. Minutes, if your cluster is small.

Almost all computer security is based on big-number theory. Today, that’s called: Law of large numbers – Wikipedia

Averages of repeated trials converge to the expected value An illustration of the law of large numbers using a particular run of rolls of a single die . As the number of rolls in this run increases, the average of the values of all the results approaches 3.5. Although each run would show a distinctive shape over a small number of throws (at the left), over a large number of rolls (to the right) the shapes would be extremely similar. In probability theory , the law of large numbers ( LLN ) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value and tends to become closer to the expected value as more trials are performed. [1] The LLN is important because it guarantees stable long-term results for the averages of some random events. 
 

What it means is that it’s hard to do math on very large numbers, and so if you have a large one, the larger the better.

Most cryptography today is based on elliptic curves.

But we know, from the proof of Fermat’s Last Theorem and specifically the Taniyama–Shimura conjecture, that all elliptic curves have modular forms.

And so this gives us an avenue of attack on all modern cryptography, using graphical mathematics.

It’s an interesting field, and problem space.

Not one I’m interested in solving, since I’m sure it has already been solved by my “associates” who now work for the NSA.

I am only interested in new problems.

Comments:

1- Sorry, but this is just wrong. “Almost all cryptography,” counted by number of bytes encrypted and decrypted, uses AES. AES does not use “large numbers,” elliptic curves, or anything of that sort – it’s essentially combinatorial in nature, with a lot of bit-diddling – though there is some group theory at its base. The same can be said about cryptographic checksums such as the SHA series, including the latest “sponge” constructions.

Where RSA and elliptic curves and such come in is public key cryptography. This is important in setting up connections, but for multiple reasons (performance – but also for excellent cryptographic reasons) is not used for bulk encryption. There are related algorithms like Diffie-Hellman and some signature protocols like DSS. All of these “use large numbers” in some sense, but even that’s pushing it – elliptic curve cryptography involves doing math over points on an elliptic curve, which does lead you to do some arithmetic, but the big advantage of elliptic curves is that the numbers are way, way smaller than for, say, RSA for equivalent security.

Much research these days is on “post-quantum cryptography” – cryptography that is secure against attacks by quantum computers (assuming we ever make those work). These tend not to be based on “arithmetic” in any straightforward sense – the ones that seem to be at the forefront these days are based on computation over lattices.

Cracking a password database that uses DES is so far away from what cryptography today is about that it’s not even related. Yes, the original Unix implementations – almost 50 years ago – used that approach. So?

C++ lambda functions are syntactic sugar for a longstanding set of practices in both C and C++: passing a function as an argument to another function, and possibly connecting a little bit of state to it.

This goes way back. Look at C’s qsort():

C++ Function example

That last argument is a function pointer to a comparison function. You could use a captureless lambda for the same purpose in modern C++.

Sometimes, you want to tack a little bit of extra state alongside the function. In C, one way to do this is to provide an additional context pointer alongside the function pointer. The context pointer will get passed back to the function as an argument.

I give an extended example here:

In C++, that context pointer can be this. When you do that, you have something called a function object. (Side note: function objects were sometimes called functors; however, functors aren’t really the same thing.)

If you overload the function call operator for a particular class, then objects of that class behave as function objects. That is, you can pretend like the object is a function by putting parentheses and an argument list after the name of an instance! When you arrive at the overloaded operator implementation, this will point at the instance.

Instances of this class will add an offset to an integer. The function call operator is operator() below.

and to use it:

C++ Class Offset

That’ll print out the numbers 42, 43, 44, … 51 on separate lines.

And tying this back to the qsort() example from earlier: C++’s std::sort can take a function object for its comparison operator.

Modern C++’s lambda functions are syntactic sugar for function objects. They declare a class with an unutterable name, and then give you an instance of that class. Under the hood, the class’ constructor implements the capture, and initializes any state variables.

Other languages have similar constructs. I believe this one originated in LISP. It goes waaaay back.

As for any challenges associated with them: lifetime management. You potentially introduce a non-nested lifetime for any state associated with the callback, function object, or lambda.

If it’s all self contained (i.e. it keeps its own copies of everything), you’re less likely to have a problem. It owns all the state it relies on.

If it has non-owning pointers or references to other objects, you need to ensure the lifetime of your callback/function object/lambda remains within the lifetime of that other non-owned object. If that non-owned object’s lifetime isn’t naturally a superset of the callback/function object/lambda, you should consider taking a copy of that object, or reconsider your design.

Each one has specific strengths in terms of syntax features.

But the way to look at this is that all three are general purpose programming languages. You can write pretty much anything in them.

Trying to rank these languages in some kind of absolute hierarchy makes no sense and only leads to tribal ‘fanboi’ arguments.

If you need part of your code to talk to hardware, or could benefit from taking control of memory management, C++ is my choice.

General web service stuff, Java has an edge due to familiarity.

Anything involving a pre-existing Microsoft component (e.g., data in SQL Server, or Azure), I will go all in on C#.

I see more similarity than difference overall.

Visual Studio Code is OK if you can’t find anything better for the language you’re using. There are better alternatives for most popular languages.

C# – Use Visual Studio Community, it’s free, and far better than Visual Studio Code.

Java – Use IntelliJ.

Go – Goland.

Python – PyCharm.

C or C++ – CLion.

If you’re using a more unusual language, maybe Rust, Visual Studio Code might be a good choice.

Comments:

#1: Just chipping in here. I used to be a massive Visual Studio fanboy and loved my fancy GUI for doing things without knowing what was actually happening. I’ve been using VS Code and Linux for a few years now and am really enjoying the bare-metal exposure you get working that way; typing commands is way faster than mouse-clicking through a bunch of GUIs. Both are good though.

#2:  C# is unusual in that it’s the only language which doesn’t follow the maxim, “if JetBrains have blessed your language with attention, use their IDE”.

Visual Studio really is first class.

#3: for Rust as long as you have rust-analyzer and clippy, you’re good to go. Vim with lua and VS Code both work perfectly.

#4: This is definitely skirting the realm of opinion. It’s a great piece of software. There is better and worse stuff but it all depends upon the person using it, their skill, and style of development.

#5: VSCode is excellent for coding. I’ve been using it for about 6 years now, mainly for Python work, but also developing JS based mobile apps. I mainly use Visual Studio, but VSC’s slightly stripped back nature has been embellished with plenty of updates and more GUI discovery methods, plus that huge extensions library (I’ve worked with the creation of an intellisense style plugin as well).

I’m personally a fan of keeping it simple on IDEs, and I work in a lot of languages. I’m not installing 6 or 7 IDEs because they apparently have advantages in that specific language, so I’d rather install one IDE which can do a credible job on all of them.

I’m more a fan of developing software than getting anally retentive about knowing all the keyboard shortcuts to format a source file. Life’s too short for that. Way too short!

To each their own. Enjoy whatever you use!

Dmitry Aliev is correct that this was introduced into the language before references.

I’ll take this question as an excuse to add a bit more color to this.

C++ evolved from C via an early dialect called “C with Classes”, which was initially implemented with Cpre, a fancy “preprocessor” targeting C that didn’t fully parse the “C with Classes” language. What it did was add an implicit this pointer parameter to member functions. E.g., a member function int f(); declared in a struct S was translated to something like:
  • int f__1S(S *this); 

(the funny name f__1S is just an example of a possible “mangling” of the name of S::f, which allows traditional linkers to deal with the richer naming environment of C++).
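A compilable sketch of that rewrite (the parameter is renamed self here, since this is a keyword in C++, and the mangled name is illustrative):

```cpp
// What the programmer wrote in "C with Classes":
struct S {
    int x;
    int f() { return x * 2; }   // member function; 'this' is implicit
};

// Roughly what Cpre emitted: a free function with the object passed
// as an explicit pointer parameter.
int f__1S(S *self) {
    return self->x * 2;
}
```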

What might come as a surprise to the modern C++ programmer is that in that model this is an ordinary parameter variable and therefore it can be assigned to! Indeed, the early implementations permitted exactly that.
Interestingly, an idiom arose around this ability: constructors could manage class-specific memory allocation by “assigning to this” before doing anything else in the constructor.
That technique (brittle as it was, particularly when dealing with derived classes) became so widespread that when C with Classes was re-implemented with a “real” compiler (Cfront), assignment to this remained valid in constructors and destructors even though this had otherwise evolved into an immutable expression. The C++ front end I maintain still has modes that accept that anachronism. See also section 17 of the old Cfront manual found here, for some fun reminiscing.
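The well-defined descendant of that per-class allocation idiom in modern C++ is a class-specific operator new/operator delete; a minimal sketch (the "pool" is faked with malloc/free plus a counter, purely for illustration):

```cpp
#include <cstdlib>
#include <new>

// A class that manages its own allocation, the modern replacement for
// "assigning to this" in the constructor.
struct Pooled {
    int value = 0;
    static inline int live = 0;            // allocation count, for illustration

    static void* operator new(std::size_t n) {
        void* p = std::malloc(n);          // a real pool would carve from a block
        if (!p) throw std::bad_alloc{};
        ++live;
        return p;
    }
    static void operator delete(void* p) noexcept {
        --live;
        std::free(p);
    }
};
```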

When standardization of C++ began, the core language work was handled by three working groups: Core I dealt with declarative stuff, Core II dealt with expression stuff, and Core III dealt with “new stuff” (templates and exception handling, mostly). In this context, Core II had to (among many other tasks) formalize the rules for overload resolution and the binding of this. Over time, they realized that that name binding should in fact be mostly like reference binding: this is bound from the object a member function is invoked on, much as a reference parameter would be.
In other words, the expression this is now effectively a kind of alias for &__this, where __this is just a name I made up for an unnamable implicit reference parameter.

C++11 further tweaked this by introducing syntax (ref-qualifiers) to control the kind of reference that this is bound from. E.g., a member function of a struct S can be declared void f() &&; so that it can only be invoked on rvalue instances.

That model was relatively well-understood by the mid-to-late 1990s… but then unfortunately we forgot about it when we introduced lambda expressions. Indeed, in C++11 we allowed lambda expressions to “capture” this.
After that language feature was released, we started getting many reports of buggy programs that “captured” this thinking they captured the class value, when instead they really wanted to capture __this (or *this). So we scrambled to try to rectify that in C++17, but because lambdas had gotten tremendously popular we had to make a compromise. Specifically:

  • we introduced the ability to capture *this
  • we allowed [=, this] since now [this] is really a “by reference” capture of *this
  • even though [this] was now a “by reference” capture, we left in the ability to write [&, this], despite it being redundant (compatibility with earlier standards)
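The difference between the two captures can be seen in a small sketch (names are mine):

```cpp
// [this] captures the pointer, so the lambda sees the object's current
// state; [*this] (C++17) captures a copy, i.e., a snapshot of the object.
struct Counter {
    int n = 0;

    auto by_pointer() { return [this] { return n; }; }   // sees later changes
    auto by_copy()    { return [*this] { return n; }; }  // snapshot of *this
};
```

The bug pattern described above is exactly a by_pointer-style capture outliving the object it points at.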

Our tale is not done, however. Once you write much generic C++ code you’ll probably find out that it’s really frustrating that the __this parameter cannot be made generic because it’s implicitly declared. So we (the C++ standardization committee) decided to allow that parameter to be made explicit in C++23. For example, one can now write something along these lines (the linked paper has the full examples):

  • void f(this S self); 

In that example, the “object parameter” (i.e., the previously hidden reference parameter __this) is now an explicit parameter and it is no longer a reference!

Here is another example along the same lines (the paper has similar ones), with a generic object parameter:

  • template <typename Self> void f(this Self&& self); 
Here:

  • the type of the object parameter is a deducible template-dependent type
  • the deduction actually allows a derived type to be found

This feature is tremendously powerful, and may well be the most significant addition by C++23 to the core language. If you’re reasonably well-versed in modern C++, I highly recommend reading that paper (P0847) — it’s fairly accessible.

It adds some extra steps in design, testing and deployment for sure. But it can buy you an easier path to scalability and an easier path to fault tolerance and live system upgrades.

It’s not REST itself that enables that. But if you use REST you will have split your code up into independently deployable chunks called services.

So more development work to do, yes, but you get something a single monolith can’t provide. If you need that, then the REST service approach is a quick way to doing it.

We must compare like for like in terms of results for questions like this.

Because at the time, there was likely no need.

Based on what I could find, the strtok library function appeared in System III UNIX some time in 1980.

In 1980, memory was small, and programs were single threaded. I don’t know whether UNIX had any support for multiple processors, even. I think that happened a few years later.

Its implementation was quite simple.
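A hedged reconstruction of roughly how such an implementation could look (this is not the actual System III source):

```cpp
#include <cstring>

// A single static pointer holds the scan position between calls; this is
// exactly the shared internal state the question asks about, and why the
// classic strtok() cannot tokenize two strings concurrently.
char *my_strtok(char *s, const char *delim) {
    static char *save;                     // one state for the whole program
    if (s == nullptr) s = save;            // continue the previous string
    s += std::strspn(s, delim);            // skip leading delimiters
    if (*s == '\0') { save = s; return nullptr; }
    char *token = s;
    s += std::strcspn(s, delim);           // scan to the end of the token
    if (*s != '\0') *s++ = '\0';           // terminate token, step past it
    save = s;
    return token;
}
```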


 

This was 3 years before they started the standardization process, and 9 years before it was standardized in ANSI C.

This was simple and good enough, and that’s what mattered most. It’s far from the only library function with internal state.

And Lex/YACC took over more complex scanning and parsing tasks, so it probably didn’t get a lot of attention for the lightweight uses it was put to.

For a tongue-in-cheek take on how UNIX and C were developed, read this classic:

 
The Rise of “Worse is Better”, by Richard Gabriel

I and just about every designer of Common Lisp and CLOS has had extreme exposure to the MIT/Stanford style of design. The essence of this style can be captured by the phrase “the right thing.” To such a designer it is important to get all of the following characteristics right:

  • Simplicity: the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
  • Correctness: the design must be correct in all observable aspects. Incorrectness is simply not allowed.
  • Consistency: the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
  • Completeness: the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.

I believe most people would agree that these are good characteristics. I will call the use of this philosophy of design the “MIT approach.” Common Lisp (with CLOS) and Scheme represent the MIT approach to design and implementation. The worse-is-better philosophy is only slightly different:

  • Simplicity: the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
  • Correctness: the design must be correct in all observable aspects. It is slightly better to be simple than correct.
  • Consistency: the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.
  • Completeness: the design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.

Early Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the “New Jersey approach.” I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach. However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach. Let me start out by retelling a story that shows that the MIT/New-Jersey distinction is valid and that proponents of each philosophy actually believe their philosophy is better.
 
 

Because the ‘under the hood’ code is about 50 years old. I’m not kidding. I worked on some video poker machines that were made in the early 1970’s.

Here’s how they work.

You have an array of ‘cards’ from 0 to 51. Pick one at random. Slap it in position 1 and take it out of your array. Do the same for the next card … see how this works?

Video poker machines are really that simple. They literally simulate a deck of cards.
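That selection procedure can be sketched as follows (a hypothetical dealing routine, equivalent to a partial Fisher-Yates shuffle):

```cpp
#include <numeric>
#include <random>
#include <vector>

// Deal a hand by repeatedly picking a random card from the remaining
// deck and removing it, so no card can be dealt twice.
std::vector<int> deal(int hand_size, std::mt19937 &rng) {
    std::vector<int> deck(52);
    std::iota(deck.begin(), deck.end(), 0);        // cards 0..51
    std::vector<int> hand;
    for (int i = 0; i < hand_size; ++i) {
        std::uniform_int_distribution<int> pick(0, (int)deck.size() - 1);
        int j = pick(rng);
        hand.push_back(deck[j]);
        deck.erase(deck.begin() + j);              // take it out of the array
    }
    return hand;
}
```

A certified machine would use an approved hardware or cryptographic RNG rather than mt19937; the point here is only the uniform draw-without-replacement.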

Anything else, at least in Nevada, is illegal. Let me rephrase that, it is ILLEGAL, in all caps.

If you were to try to make a video poker game (or video keno, or slot machine) in any other way than as close to truly random selection from an ‘array’ of options as you can get, Nevada Gaming will come after you so hard and fast, your third cousin twice removed will have their ears ring for a week.

That is if the Families don’t get you first, and they’re far less kind.

All the ‘magic’ is in the payout tables, which on video poker and keno are literally posted on every machine. If you can read them, you can figure out exactly what the payout odds are for any machine.

There’s also a little note at the bottom stating that the video poker machine you’re looking at uses a 52 card deck.

Comments:

1- I have a slot machine, and the code on the odds chip looks much like an Excel spreadsheet: every combination is displayed in this spreadsheet, so the exact odds can be listed in payout tables. The machine picks a random number, let’s say 452 in 1000. The computer looks at the spreadsheet and says that this is the combination bar-bar-7, and you get 2 credits for it. The wheels then spin to match the indication on the spreadsheet. If I go into the game diagnostics I can see whether it is a win or not; you do not win on what the wheels display, but on the actual number from the spreadsheet. The game knows whether you won or lost before the wheels stop.

2- I had a conversation with a guy who had retired from working in casino security. He was also responsible for some setup and maintenance on slot machines, video poker and others. I asked about the infamous video poker machine that a programmer at the manufacturer had put in a backdoor so he and a few pals could get money. That was just before he’d started but he knew how it was done. IIRC there was a 25 step process of combinations of coin drops and button presses to make the machine hit a royal flush to pay the jackpot.

Slot machines that have mechanical reels actually run very large virtual reels. The physical reels have position encoders so the electronics and software can select which symbol to stop on. This makes for far more possible combinations than relying on the space available on the physical reels.

Those islands of machines with the sign that says 95% payout? Well, you guess which machine in the group is set to that payout % while the rest are much closer to the minimum allowed.

Machines with a video screen that gives you a choice of things to select by touch or button press? It doesn’t matter what you select, the outcome is pre-determined. For example, if there’s a grid of spots and the first three matches you get determines how many free spins you get, if the code stopped on giving you 7 free spins, out of a possible maximum of 25, you’re getting 7 free spins no matter which spots you touch. It will tease you with a couple of 25s, a 10 or 15 or two, but ultimately you’ll get three 7s, and often the 3rd 25 will be close to the other two or right next to the last 7 “you” selected to make you feel like you just missed it when the full grid is briefly revealed.

There was a Discovery Channel show where the host used various power tools to literally hack things apart to show their insides and how they worked. In one episode he sawed open a couple of slot machines, one from the 1960’s and a purely mechanical one from the 1930’s or possibly 1940’s. In that old machine he discovered the casino it had been in decades prior had installed a cheat. There was a metal wedge bolted into the notch for the 7 on one reel so it could never hit the 777 jackpot. I wondered if the Nevada Gaming Commission could trace the serial number and if they could levy a fine if the company that had owned and operated it was still in business.

3- Slightly off-topic: I worked for a company that sold computer hardware, and one of our customers was a company that makes gambling machines. They said that they spent close to $0 on software and all their budget on licensing characters.

This question is like asking why you would ever use int when you have the Integer class. Java programmers seem especially zealous about everything needing to be wrapped, and wrapped, and wrapped.

Yes, ArrayList<Integer> does everything that int[] does and more… but sometimes all you need to do is swat a fly, and you just need a flyswatter, not a machine-gun.

Did you know that in order to convert int[] to ArrayList&lt;Integer&gt;, the system has to go through the array elements one at a time and box them, which means creating a garbage-collected object on the heap (i.e., an Integer) for each individual int in the array? That’s right: if you just use int[], then only one memory allocation is needed, as opposed to one for each item.

I understand that most Java programmers don’t know about that, and the ones who do probably don’t care. They will say that this isn’t going to be the reason your program is running slowly. They will say that if you need to care about those kinds of optimizations, then you should be writing code in C++ rather than Java. Yadda yadda yadda, I’ve heard it all before. Personally though, I think that you should know, and should care, because it just seems wasteful to me. Why dynamically allocate n individual objects when you could just have a contiguous block in memory? I don’t like waste.

I also happen to know that if you have a blasé attitude about performance in general, then you’re apt to be the sort of programmer who unknowingly, unnecessarily writes four nested loops and then has no idea why their program took ten minutes to run even though the list was only 100 elements long. At that point, not even C++ will save you from your inefficiently written code. There’s a slippery slope here.

I believe that a software developer is a sort of craftsman. They should understand their craft, not only at the language level, but also how it works internally. They should convert int[] to ArrayList<Integer> only because they know the cost is insignificant, and they have a particular reason for doing so other than “I never use arrays, ArrayList is better LOL”.

Very similar, yes.

Both languages feature:

  • static typing
  • nominative interface typing
  • garbage collection
  • class-based design
  • single-dispatch polymorphism

so whilst syntax differs, the key things that separate OO support across languages are the same.

There are differences, but you can write the same design of OO program in either language and it won’t look out of place.

Last time I needed to write an Android app, even though I already knew Java, I still went with Kotlin 😀

I’d rather work in a language I don’t know than… Java… and yes, I know a decent Java IDE can auto-generate this code – but this only solves the problem of writing the code, it doesn’t solve the problem of having to read it, which happens a lot more than writing it.

I mean, consider a simple data class: in Java, fields plus a constructor, getters, equals, hashCode and toString; in Kotlin, a one-line data class declaration. Which of the two conveys the programmer’s intent more clearly, and which one would you rather read when you forget what a part of the program does and need a refresher?

Even if both of them required no effort to write… the Java version is pure brain poison…

Because it’s insufficient to deal with the memory semantics of current computers. In fact, it was obsolete almost as soon as it first became available.

Volatile tells a compiler that it may not assume the value of a memory location has not changed between reads or writes. This is sometimes sufficient to deal with memory-mapped hardware registers, which is what it was originally for.

But that doesn’t deal with the semantics of a multiprocessor machine’s cache, where a memory location might be written and read from several different places, and we need to be sure we know when written values will be observable relative to control flow in the writing thread.

Instead, we need to deal with acquire/release semantics of values, and the compilers have to output the right machine instructions that we get those semantics from the real machines. So, the atomic memory intrinsics come to the rescue. This is also why inline assembler acts as an optimization barrier; before there were intrinsics for this, it was done with inline assembler. But intrinsics are better, because the compiler can still do some optimization with them.
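A small sketch of those acquire/release semantics with std::atomic (the producer/consumer functions are my own illustration):

```cpp
#include <atomic>
#include <thread>

// The release store "publishes" the ordinary write to 'data'; any thread
// that observes ready == true via an acquire load is guaranteed to see
// that write too. A plain 'volatile' flag gives no such guarantee.
int data = 0;
std::atomic<bool> ready{false};

void producer() {
    data = 42;                                     // ordinary write...
    ready.store(true, std::memory_order_release);  // ...published here
}

int consumer() {
    while (!ready.load(std::memory_order_acquire)) {}  // spin until published
    return data;                                   // guaranteed to see 42
}
```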

C++ is a programming language specified through a standard that is “abstract” in various ways. For example, that standard doesn’t currently formally recognize a notion of “runtime” (I would actually like to change that a little bit in the future, but we’ll see).

Now, in order to allow implementations to make assumptions it removes certain situations from the responsibility of the implementation. For example, it doesn’t require (in general) that the implementation ensure that accesses to objects are within the bounds of those objects. By dropping that requirement, the code for valid accesses can be more efficient than would be required if out-of-bounds situations were the responsibility of the implementation (as is the case in most other modern programming languages). Those “situations” are what we call “undefined behaviour”: The implementation has no specific responsibilities and so the standard allows “anything” to happen. This is in part why C++ is still very successful in applications that call for the efficient use of hardware resources.

Note, however, that the standard doesn’t disallow an implementation from doing something that is implementation-specified in those “undefined behaviour” situations. It’s perfectly all right (and feasible) for a C++ implementation to be “memory safe” for example (e.g., not attempt access outside of object bounds). Such implementations have existed in the past (and might still exist, but I’m not currently aware of one that completely “contains” undefined behaviour).

ADDENDUM (July 16th, 2021):

The following article about undefined behavior crossed my metaphorical desk today:

To Conclude:

Coding is a process of translating and transforming a problem into a step-by-step set of instructions for a machine. Like every skill, it requires time and practice to learn. However, a few simple tips can make the learning process easier and faster. First, start with the basics: do not try to learn too many programming languages at once; it is better to focus on one language and master it before moving on to the next. Second, make use of resources such as books, online tutorials, and coding bootcamps, which can provide the structure and support you need to progress quickly. Finally, practice regularly and find a mentor who can offer guidance and feedback. By following these tips, you can develop the programming skills you need to succeed in your career.

There are plenty of resources available to help you improve your coding skills. Check out some of our favorite coding tips below:

– Find a good code editor and learn its shortcuts. This will save you time in the long run.
– Do lots of practice exercises. It’s important to get comfortable with the syntax and structure of your chosen programming language.
– Get involved in the coding community. There are many online forums and groups where programmers can ask questions, share advice, and collaborate on projects.
– Read code written by experienced developers. This will give you insight into best practices and advanced techniques.


How does a database handle pagination?


It doesn’t. First, a database is a collection of related data, so I assume you mean DBMS or database language.

Second, pagination is generally a function of the front-end and/or middleware, not the database layer.

But some database languages provide helpful facilities that aid in implementing pagination. For example, many SQL dialects provide LIMIT and OFFSET clauses that can be used to emit up to n rows starting at a given row offset, i.e., a “page” of rows. If the query results are sorted via ORDER BY and are generally unchanged between successive invocations, then that can be used to implement pagination.

That may not be the most efficient or effective implementation, though.
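As a sketch of the offset arithmetic behind such a facility (the table and column names are made up):

```cpp
#include <string>

// Map a 1-based page number to a LIMIT/OFFSET query: page n starts at
// row (n - 1) * page_size.
std::string page_query(int page, int page_size) {
    int offset = (page - 1) * page_size;
    return "SELECT id, name FROM users ORDER BY id"
           " LIMIT " + std::to_string(page_size) +
           " OFFSET " + std::to_string(offset);
}
```

The ORDER BY is essential: without a stable ordering, consecutive pages can overlap or skip rows.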


So how do you propose pagination should be done?

In the context of web apps, let’s say there are 100 million users. One cannot dump all the users in a single response.

Cache database query results in the middleware layer using Redis or similar and serve out pages of rows from that.

What if you have 30,000-plus rows? Do you fetch all of that from the database and cache it in Redis?

I feel the most efficient solution is still offset and limit. It doesn’t make sense to use a database and then end up putting all of your data in Redis especially data that changes a lot. Redis is not for storing all of your data.



If you have a large data set, you should use offset and limit: getting only what is needed from the database into main memory (and maybe caching those pages in Redis) at any point in time is very efficient.

With 30,000 rows in a table, if offset/limit is the only viable or appropriate restriction, then that’s sometimes the way to go.

More often, there’s a much better way of restricting 30,000 rows via some search criteria that significantly reduces the displayed volume of rows — ideally to a single page or a few pages (which are appropriate to cache in Redis.)

It’s unlikely (though it does happen) that users really want to casually browse 30,000 rows, page by page. More often, they want this one record, or these small number of records.


 

Question: This is a general question that applies to MySQL, Oracle DB or whatever else might be out there.

I know for MySQL there is LIMIT offset,size; and for Oracle there is ‘ROW_NUMBER’ or something like that.

But when such ‘paginated’ queries are called back to back, does the database engine actually do the entire ‘select’ all over again and then retrieve a different subset of results each time? Or does it do the overall fetching of results only once, keeps the results in memory or something, and then serves subsets of results from it for subsequent queries based on offset and size?

If it does the full fetch every time, then it seems quite inefficient.

If it does full fetch only once, it must be ‘storing’ the query somewhere somehow, so that the next time that query comes in, it knows that it has already fetched all the data and just needs to extract next page from it. In that case, how will the database engine handle multiple threads? Two threads executing the same query?


Answer: First of all, do not assume in advance that something will be quick or slow without taking measurements, and do not complicate the code in advance by downloading 12 pages at once and caching them because “it seems to me that it will be faster”.

YAGNI principle – the programmer should not add functionality until deemed necessary.
Do it in the simplest way (ordinary pagination, one page at a time) and measure how it works in production; if it is slow, try a different method; if the speed is satisfactory, leave it as it is.


From my own practice: an application retrieves data from a table containing about 80,000 records; the main table is joined with 4-5 additional lookup tables; the whole query is paginated at about 25-30 records per page, about 2,500-3,000 pages in total. The database is Oracle 12c, there are indexes on a few columns, and the queries are generated by Hibernate. Server-side measurements on the production system show that the median (50th percentile) time to retrieve one page is about 300 ms, and the 95th percentile is under 800 ms; that is, 95% of requests for a single page take less than 800 ms. Adding transfer time from the server to the user and a rendering time of about 0.5-1 seconds, the total is under 2 seconds. That’s enough; users are happy.


And some theory: see this answer to learn the purpose of the Pagination pattern.


Machine Learning Engineer Interview Questions and Answers

Summary of Machine Learning and AI Capabilities


Learning: Supervised, Unsupervised, Reinforcement Learning

What is Machine Learning?

Machine learning is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. In classical machine learning, you select a model to train and then manually perform feature extraction. It is used to devise complex models and algorithms that lend themselves to prediction; in commercial use this is known as predictive analytics.


Below are the most common Machine Learning use cases and capabilities:

Summary of ML/AI Capabilities


What is Supervised Learning? 

Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consist of a set of training examples.

Algorithms: Support Vector Machines, Regression, Naive Bayes, Decision Trees, K-nearest Neighbor Algorithm and Neural Networks


Example: If you built a fruit classifier, the labels will be “this is an orange, this is an apple and this is a banana”, based on showing the classifier examples of apples, oranges and bananas.
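As an illustration, here is a minimal sketch of such a classifier in pure Python. The features (weight in grams, a skin-smoothness score) and their values are hypothetical, and a 1-nearest-neighbor rule stands in for a real trained model:

```python
from math import dist

# Toy labeled training data: (weight_grams, smoothness_score) -> label.
# Features and values are hypothetical, for illustration only.
training = [
    ((150, 0.9), "apple"),
    ((140, 0.8), "apple"),
    ((180, 0.4), "orange"),
    ((170, 0.3), "orange"),
    ((120, 0.7), "banana"),
]

def classify(sample):
    """1-nearest-neighbor: return the label of the closest training example."""
    return min(training, key=lambda ex: dist(ex[0], sample))[1]

print(classify((175, 0.35)))  # closest to the orange examples -> "orange"
```

Because the training examples carry labels ("this is an orange"), the model can assign a label to a new, unseen fruit, which is the essence of supervised learning.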

What is Unsupervised learning?

Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labelled responses.

Algorithms: Clustering, Anomaly Detection, Neural Networks and Latent Variable Models

Example: In the same example, a fruit clustering will categorize as “fruits with soft skin and lots of dimples”, “fruits with shiny hard skin” and “elongated yellow fruits”.

Explain the difference between supervised and unsupervised machine learning?

In supervised machine learning algorithms, we have to provide labeled data, for example, predicting stock market prices from labeled historical prices, whereas in unsupervised learning we do not need labeled data, for example, clustering customers into segments.

What is deep learning, and how does it contrast with other machine learning algorithms?

Deep learning is a subset of machine learning that is concerned with neural networks: how to use backpropagation and certain principles from neuroscience to more accurately model large sets of unlabelled or semi-structured data. In that sense, deep learning can also act as an unsupervised learning algorithm that learns representations of data through the use of neural nets.

What is Problem Formulation in Machine Learning?

The problem formulation phase of the ML Pipeline is critical, and it’s where everything begins. Typically, this phase is kicked off with a question of some kind. Examples of these kinds of questions include: Could cars really drive themselves?  What additional product should we offer someone as they checkout? How much storage will clients need from a data center at a given time? 

The problem formulation phase starts by seeing a problem and thinking “what question, if I could answer it, would provide the most value to my business?” If I knew the next product a customer was going to buy, is that most valuable? If I knew what was going to be popular over the holidays, is that most valuable? If I better understood who my customers are, is that most valuable?

However, some problems are not so obvious. When sales drop, new competitors emerge, or there’s a big change to a company/team/org, it can be easy to say, “I see the problem!” But sometimes the problem isn’t so clear. Consider self-driving cars. How many people think to themselves, “driving cars is a huge problem”? Probably not many. In fact, there isn’t a problem in the traditional sense of the word but there is an opportunity. Creating self-driving cars is a huge opportunity. That doesn’t mean there isn’t a problem or challenge connected to that opportunity. How do you design a self-driving system? What data would you look at to inform the decisions you make? Will people purchase self-driving cars?


Part of the problem formulation phase includes seeing where there are opportunities to use machine learning.  

To formulate a problem in ML, consider the following questions:

  1. Is machine learning appropriate for this problem, and why or why not?
  2. What is the ML problem if there is one, and what would a success metric look like?
  3. What kind of ML problem is this?
  4. Is the data appropriate?

Machine Learning Problem Formulation Examples:

1)  Amazon recently began advertising to its customers when they visit the company website. The Director in charge of the initiative wants the advertisements to be as tailored to the customer as possible. You will have access to all the data from the retail webpage, as well as all the customer data.

  • ML is appropriate because of the scale, variety and speed required. There are potentially thousands of ads and millions of customers that need to be served customized ads immediately as they arrive to the site.
  • The problem is ads that are not useful to customers are a wasted opportunity and a nuisance to customers, yet not serving ads at all is a wasted opportunity. So how does Amazon serve the most relevant advertisements to its retail customers?
    1. Success would be the purchase of a product that was advertised.
  • This is a supervised learning problem because we have a labeled data point, our success metric, which is the purchase of a product.
  • This data is appropriate because it is both the retail webpage data as well as the customer data.

What are the different Algorithm techniques in Machine Learning?

The different types of techniques in Machine Learning are
● Supervised Learning
● Unsupervised Learning
● Semi-supervised Learning
● Reinforcement Learning
● Transduction
● Learning to Learn

Machine Learning Engineer Interview Questions and Answers

What’s the difference between a generative and discriminative model?

A generative model will learn categories of data while a discriminative model will simply learn the distinction between different categories of data. Discriminative models will generally outperform generative models on classification tasks.

What Are the Applications of Supervised Machine Learning in Modern Businesses?


Applications of supervised machine learning include:
Email Spam Detection
Here we train the model using historical data that consists of emails categorized as spam or not spam. This labeled information is fed as input to the model.
Healthcare Diagnosis
By providing images regarding a disease, a model can be trained to detect if a person is suffering from the disease or not.
Sentiment Analysis
This refers to the process of using algorithms to mine documents and determine whether they’re positive, neutral, or negative in sentiment.
Fraud Detection
Training the model to identify suspicious patterns, we can detect instances of possible fraud.

What Is Semi-supervised Machine Learning?

Supervised learning uses data that is completely labeled, whereas unsupervised learning uses no training data.
In the case of semi-supervised learning, the training data contains a small amount of labeled data and a large amount of unlabeled data.

What Are Unsupervised Machine Learning Techniques?

There are two techniques used in unsupervised learning: clustering and association.

Clustering
● Clustering problems involve dividing the data into subsets. These subsets, also called clusters, contain data points that are similar to each other. Different clusters reveal different details about the objects, unlike classification or regression.

Association
● In an association problem, we identify patterns of associations between different variables or items.
● For example, an eCommerce website can suggest other items for you to buy, based on the prior purchases that you have made, spending habits, items in your wish list, other customers’ purchase habits, and so on.

What evaluation approaches would you work to gauge the effectiveness of a machine learning model?

You would first split the dataset into training and test sets, or perhaps use cross-validation techniques to further segment the dataset into composite sets of training and test sets within the data. You should then implement a choice selection of performance metrics: here is a fairly comprehensive list. You could use measures such as the F1 score, the accuracy, and the confusion matrix. What’s important here is to demonstrate that you understand the nuances of how a model is measured and how to choose the right performance measures for the right situations.
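A minimal holdout evaluation can be sketched in pure Python; the dataset and the threshold "model" below are hypothetical stand-ins for real data and a real trained model:

```python
import random

# Hypothetical dataset: feature x, true label = 1 if x > 5
data = [(x, int(x > 5)) for x in range(10)]
random.seed(0)
random.shuffle(data)
train, test = data[:7], data[7:]   # simple holdout split

predict = lambda x: int(x > 4)     # stand-in "trained" model (imperfect threshold)

# Confusion-matrix counts on the held-out test set only
tp = sum(1 for x, y in test if predict(x) == 1 and y == 1)
tn = sum(1 for x, y in test if predict(x) == 0 and y == 0)
fp = sum(1 for x, y in test if predict(x) == 1 and y == 0)
fn = sum(1 for x, y in test if predict(x) == 0 and y == 1)

accuracy = (tp + tn) / len(test)
print(tp, tn, fp, fn, accuracy)
```

The key point is that the metrics are computed on data the model never saw during training; cross-validation repeats this split several times and averages the results.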

What Are the Three Stages of Building a Model in Machine Learning?

The three stages of building a machine learning model are:
● Model Building: Choose a suitable algorithm for the model and train it according to the requirement.
● Model Testing: Check the accuracy of the model using the test data.
● Applying the Model: Make the required changes after testing and use the final model for real-time projects. Here, it’s important to remember that, once in a while, the model needs to be checked to make sure it’s working correctly, and modified to keep it up to date.

A data scientist wants to visualize the correlation between features in their dataset. What tool(s) can they use to visualize this in a correlation matrix? 

Answer: Matplotlib, Seaborn

You are preprocessing a dataset that includes categorical features. You want to determine which categories of particular features are most common in your dataset. Which basic descriptive statistic could you use?
Answer: Mode

What are some examples of categorical features?

In machine learning and data science, categorical features are variables that can take on one of a limited number of values. For example, a categorical feature might represent the color of a car as Red, Yellow, or Blue. In general, categorical features are used to represent discrete characteristics (such as gender, race, or profession) that can be sorted into categories. When working with categorical features, it is often necessary to convert them into numerical form so that they can be used by machine learning algorithms. This process is known as encoding, and there are several different ways to encode categorical features. One common approach is to use a technique called one-hot encoding, which creates a new column for each possible category. For example, if there are three colors (Red, Yellow, and Blue), then each color would be represented by a separate column where all the values are either 0 or 1 (1 indicates that the row belongs to that category). Machine learning algorithms can then treat each column as a separate feature when training the model. Other approaches to encoding categorical data include label encoding and target encoding. These methods are often used in conjunction with one-hot encoding to improve the accuracy of machine learning models.
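One-hot encoding can be sketched in a few lines of pure Python, using the color values from the example above:

```python
# One-hot encoding sketch: one 0/1 column per possible category
colors = ["Red", "Yellow", "Blue", "Red"]
categories = sorted(set(colors))   # ['Blue', 'Red', 'Yellow']

# Each row becomes a vector with a 1 in its category's column
encoded = [[int(c == cat) for cat in categories] for c in colors]
# 'Red'    -> [0, 1, 0]
# 'Yellow' -> [0, 0, 1]
# 'Blue'   -> [1, 0, 0]
print(encoded)
```

Libraries such as pandas and scikit-learn provide ready-made versions of this transformation, but the underlying idea is exactly this mapping.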

How many variables are enough for multiple regressions?

Which of the following is most suitable for supervised learning?

Answer: Identifying birds in an image

You’ve plotted the correlation matrix of your dataset’s features and realized that two of the features present a high negative correlation (-0.95). What should you do?

Answer: Remove one of the features
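To see how such a coefficient is computed, here is a minimal Pearson-correlation sketch in pure Python, using hypothetical feature values that are perfectly negatively correlated:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Two hypothetical features that move in exactly opposite directions
f1 = [1, 2, 3, 4, 5]
f2 = [10, 8, 6, 4, 2]
print(round(pearson(f1, f2), 2))  # -1.0
```

A value near -0.95 means the two features carry nearly the same information, which is why dropping one of them is the usual remedy.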

You are in charge of preprocessing the data your publishing company wants to use for a new ML model they’re building, which aims to predict the influence an academic journal will have in its field. The preprocessing step is necessary to prepare the data for model training. What type of issue with the data might you encounter during this preprocessing phase? 

Answer: Outliers, Missing values

A Machine Learning Engineer is creating and preparing data for a linear regression model. However, while preparing the data, the Engineer notices that about 20% of the numerical data contains missing values in the same two columns. The shape of the data is 500 rows by 4 columns, including the target column.
How can the Engineer handle the missing values in the data?

(Select TWO.)

Answer: Fill the missing values with the mean of the column; impute the missing values using regression.
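The first option, filling missing values with the column mean, can be sketched in pure Python; the column values below are hypothetical:

```python
# Mean imputation sketch: fill missing values (None) with the column mean
column = [4.0, None, 6.0, None, 5.0]

observed = [v for v in column if v is not None]
mean = sum(observed) / len(observed)        # (4 + 6 + 5) / 3 = 5.0

filled = [mean if v is None else v for v in column]
print(filled)  # [4.0, 5.0, 6.0, 5.0, 5.0]
```

Regression imputation is the more sophisticated variant: instead of a constant, the missing entry is predicted from the other columns of the same row.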

A Data Scientist created a correlation matrix between nine variables and the target variable. The correlation coefficient between two of the numerical variables, variable 1 and variable 5, is -0.95. How should the Data Scientist interpret the correlation coefficient?

Answer: As variable 1 increases, variable 5 decreases

An advertising and analytics company uses machine learning to predict user response to online advertisements using a custom XGBoost model. The company wants to improve its ML pipeline by porting its training and inference code, written in R, to Amazon SageMaker, and do so with minimal changes to the existing code.

Answer: Use the Build Your Own Container (BYOC) Amazon SageMaker option.
Create a new Docker container with the existing code. Register the container in Amazon Elastic Container Registry. Finally, run the training and inference jobs using this container.

An ML engineer at a text analytics startup wants to develop a text classification model. The engineer collected large amounts of data to develop a supervised text classification model. The engineer is getting 99% accuracy on the dataset but when the model is deployed to production, it performs significantly worse. What is the most likely cause of this?

Answer: The engineer did not split the data to validate the model on unseen data.

For a classification problem, what does the loss function measure?
Answer: A loss function measures how accurate your prediction is with respect to the true values.

Gradient Descent is an important optimization method. What are 3 TRUE statements about the gradient descent method?

(Select THREE.)

Answer: It tries to find the minimum of a loss function. It can involve multiple iterations. It uses a learning rate to multiply the effect of gradients.
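All three statements can be illustrated with a minimal pure-Python sketch; the quadratic loss below is a hypothetical stand-in for a real model's loss:

```python
# Gradient descent on the loss L(w) = (w - 3)^2, whose minimum is at w = 3.
# The gradient is dL/dw = 2 * (w - 3).
w = 0.0
learning_rate = 0.1

for _ in range(100):                 # multiple iterations
    gradient = 2 * (w - 3)
    w -= learning_rate * gradient    # step size scaled by the learning rate

print(round(w, 4))  # 3.0
```

Each iteration moves `w` a fraction of the gradient toward the minimum; the learning rate controls how large that fraction is.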

What is Deep Learning?

Deep Learning is nothing but a paradigm of machine learning which has shown incredible promise in recent years. This is because of the fact that Deep Learning shows a great analogy with the functioning of the neurons in the human brain.


What is the difference between machine learning and deep learning?

Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Machine learning can be categorized in the following four categories.
1. Supervised machine learning,
2. Semi-supervised machine learning,
3. Unsupervised machine learning,
4. Reinforcement learning.

Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.

• The main difference between deep learning and machine learning is due to the way data is
presented in the system. Machine learning algorithms almost always require structured data, while deep learning networks rely on layers of ANN (artificial neural networks).

• Machine learning algorithms are designed to “learn” to act by understanding labeled data and then use it to produce new results with more datasets. However, when the result is incorrect, there is a need to “teach them”. Because machine learning algorithms require labeled data, they are not suitable for solving complex queries that involve a huge amount of data.

• Deep learning networks do not require human intervention, as multilevel layers in neural
networks place data in a hierarchy of different concepts, which ultimately learn from their own mistakes. However, even they can be wrong if the data quality is not good enough.

• Data decides everything. It is the quality of the data that ultimately determines the quality of the result.

• Both of these subsets of AI are somehow connected to data, which makes it possible to represent a certain form of “intelligence.” However, you should be aware that deep learning requires much more data than a traditional machine learning algorithm. The reason for this is that deep learning networks can identify different elements in neural network layers only when more than a million data points interact. Machine learning algorithms, on the other hand, are capable of learning by pre-programmed criteria.

Can you explain the differences between supervised, unsupervised, and reinforcement learning?

In supervised learning, we train a model to learn the relationship between input data and output
data. We need to have labeled data to be able to do supervised learning.
With unsupervised learning, we only have unlabeled data. The model learns a representation of the data. Unsupervised learning is frequently used to initialize the parameters of the model when we have a lot of unlabeled data and a small fraction of labeled data. We first train an unsupervised model and, after that, we use the weights of the model to train a supervised model. In reinforcement learning, the model has some input data and a reward depending on the output of the model. The model learns a policy that maximizes the reward. Reinforcement learning has been applied successfully to strategic games such as Go and even classic Atari video games.

What is the reason for the popularity of Deep Learning in recent times? 

Now although Deep Learning has been around for many years, the major breakthroughs from these techniques came just in recent years. This is because of two main reasons:
• The increase in the amount of data generated through various sources
• The growth in hardware resources required to run these models
GPUs are multiple times faster and they help us build bigger and deeper deep learning models in comparatively less time than we required previously


What is reinforcement learning?

 

Reinforcement learning allows an agent to take actions in an environment to maximize cumulative reward. It learns by trial and error through a reward/penalty system: the environment rewards the agent, so over time the agent makes better decisions.
Example: robot = agent, maze = environment. It is used for complex tasks such as self-driving cars and game AI.

 

RL is a series of time steps in a Markov Decision Process:

 

1. Environment: the space in which the RL agent operates
2. State: data describing the agent's current situation, resulting from its past actions
3. Action: the action taken by the agent
4. Reward: a number received by the agent after its last action
5. Observation: data about the environment, which can be fully visible or partially hidden

 

Explain Ensemble learning.

In ensemble learning, many base models like classifiers and regressors are generated and combined together so that they give better results. It is used when we build component classifiers that are accurate and independent. There are sequential as well as parallel ensemble methods.


What are the parametric models? Give an example.

Parametric models are those with a finite number of parameters. To predict new data, you only need to know the parameters of the model. Examples include linear regression, logistic regression, and linear SVMs.
Non-parametric models are those with an unbounded number of parameters, allowing for more flexibility. To predict new data, you need to know the parameters of the model and the state of the data that has been observed. Examples include decision trees, k-nearest neighbors, and topic models using latent Dirichlet allocation.

What are support vector machines?

 

Support vector machines are supervised learning algorithms used for classification and regression analysis.

What is batch statistical learning?

Statistical learning techniques allow learning a function or predictor from a set of observed data that can make predictions about unseen or future data. These techniques provide guarantees on the performance of the learned predictor on the future unseen data based on a statistical assumption on the data generating process.

 

What Will Happen If the Learning Rate is Set inaccurately (Too Low or Too High)? 

 

When your learning rate is too low, training of the model will progress very slowly as we are making minimal updates to the weights. It will take many updates before reaching the minimum point.
If the learning rate is set too high, this causes undesirable divergent behavior in the loss function due to drastic updates in weights. Training may fail to converge (the model never settles on a good output) or even diverge (the updates are too chaotic for the network to train).

 

What Is The Difference Between Epoch, Batch, and Iteration in Deep Learning? 

 

Epoch – Represents one iteration over the entire dataset (everything put into the training model).
Batch – Refers to when we cannot pass the entire dataset into the neural network at once, so we divide the dataset into several batches.
Iteration – if we have 10,000 images as data and a batch size of 200, then an epoch should run 50 iterations (10,000 divided by 200).
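The relationship between the three terms can be checked with simple arithmetic:

```python
# Iterations per epoch = dataset size / batch size
dataset_size = 10_000   # total number of images
batch_size = 200        # images processed per iteration

iterations_per_epoch = dataset_size // batch_size
print(iterations_per_epoch)  # 50
```

So one epoch (one full pass over all 10,000 images) consists of 50 iterations, each processing one batch of 200 images.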

 

Why Is Tensorflow the Most Preferred Library in Deep Learning?

Tensorflow provides both C++ and Python APIs, making it easier to work on and has a faster compilation time compared to other Deep Learning libraries like Keras and Torch. Tensorflow supports both CPU and GPU computing devices.

What Do You Mean by Tensor in Tensorflow?

A tensor is a mathematical object represented as arrays of higher dimensions. These arrays of data with different dimensions and ranks fed as input to the neural network are called “Tensors.”

Explain a Computational Graph.

Everything in TensorFlow is based on creating a computational graph: a network of nodes where nodes represent mathematical operations and edges represent tensors. Since data flows in the form of a graph, it is also called a “DataFlow Graph.”

Cognition: Reasoning on top of data (Regression, Classification, Pattern Recognition)

What is the difference between classification and regression?

Classification is used to produce discrete results, classification is used to classify data into some specific categories. For example, classifying emails into spam and non-spam categories.
Whereas, We use regression analysis when we are dealing with continuous data, for example predicting stock prices at a certain point in time.

 

Explain the Bias-Variance Tradeoff.

Predictive models have a tradeoff between bias (how well the model fits the data) and variance (how much the model changes based on changes in the inputs).
Simpler models are stable (low variance) but they don’t get close to the truth (high bias).
More complex models are more prone to overfitting (high variance) but they are expressive enough to get close to the truth (low bias). The best model for a given problem usually lies somewhere in the middle.


What is the difference between stochastic gradient descent (SGD) and gradient descent (GD)?

Both algorithms are methods for finding a set of parameters that minimize a loss function by evaluating parameters against data and then making adjustments.
In standard gradient descent, you’ll evaluate all training samples for each set of parameters.
This is akin to taking big, slow steps toward the solution.
In stochastic gradient descent, you’ll evaluate only 1 training sample for the set of parameters before updating them. This is akin to taking small, quick steps toward the solution.

How Can You Choose a Classifier Based on a Training Set Data Size?

When the training set is small, a model with high bias and low variance tends to work better because it is less likely to overfit. For example, Naive Bayes works best when the training set is small. When the training set is large, models with low bias and high variance tend to perform better, as they can capture complex relationships.

 

Explain Latent Dirichlet Allocation (LDA)

Latent Dirichlet Allocation (LDA) is a common method of topic modeling, or classifying documents by subject matter.
LDA is a generative model that represents documents as a mixture of topics that each have their own probability distribution of possible words.
The “Dirichlet” distribution is simply a distribution of distributions. In LDA, documents are distributions of topics that are distributions of words.

Explain Principle Component Analysis (PCA)

PCA is a method for transforming features in a dataset by combining them into uncorrelated linear combinations.
These new features, or principal components, sequentially maximize the variance represented (i.e. the first principal component has the most variance, the second principal component has the second most, and so on).
As a result, PCA is useful for dimensionality reduction because you can set an arbitrary variance cutoff.

PCA is a dimensionality reduction technique that enables you to identify the correlations and patterns in the dataset so that it can be transformed into a dataset of significantly lower dimensions without any loss of important information.

• It is an unsupervised statistical technique used to examine the interrelations among a set of variables. It is also known as a general factor analysis where regression determines a line of best fit.

• It works on a condition that while the data in a higher-dimensional space is mapped to data in a lower dimension space, the variance or spread of the data in the lower dimensional space should be maximum.

PCA is carried out in the following steps

1. Standardization of Data
2. Computing the covariance matrix
3. Calculation of the eigenvectors and eigenvalues
4. Computing the Principal components
5. Reducing the dimensions of the Data.
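For two features, these steps can be sketched in pure Python using the closed-form eigendecomposition of a symmetric 2×2 covariance matrix; the data values are hypothetical, and simple centering stands in for full standardization:

```python
from math import sqrt

# Hypothetical 2-feature dataset
xs = [2.5, 0.5, 2.2, 1.9, 3.1, 2.3]
ys = [2.4, 0.7, 2.9, 2.2, 3.0, 2.7]

# Step 1 (simplified): center the data
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
xc = [x - mx for x in xs]
yc = [y - my for y in ys]

# Step 2: covariance matrix [[a, b], [b, c]]
a = sum(v * v for v in xc) / (n - 1)
b = sum(u * v for u, v in zip(xc, yc)) / (n - 1)
c = sum(v * v for v in yc) / (n - 1)

# Step 3: leading eigenvalue of a symmetric 2x2 matrix (closed form)
lam = (a + c) / 2 + sqrt(((a - c) / 2) ** 2 + b ** 2)

# Step 4: corresponding (unnormalized) eigenvector = first principal component
pc1 = (b, lam - a)
print("first principal component direction:", pc1)
```

Step 5 would then project each centered point onto `pc1`, discarding the lower-variance direction; in practice libraries such as scikit-learn handle all of this for any number of features.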



What’s the F1 score? How would you use it?

The F1 score is a measure of a model’s performance. It is a weighted average of the precision and recall of a model, with results tending to 1 being the best, and those tending to 0 being the worst. You would use it in classification tests where true negatives don’t matter much.

When should you use classification over regression?

Classification produces discrete values and maps the dataset into strict categories, while regression gives you continuous results that allow you to better distinguish differences between individual points.
You would use classification over regression if you wanted your results to reflect the belongingness of data points in your dataset to certain explicit categories (ex: If you wanted to know whether a name was male or female rather than just how correlated they were with male and female names.)

How do you ensure you’re not overfitting with a model?

This is a simple restatement of a fundamental problem in machine learning: the possibility of overfitting training data and carrying the noise of that data through to the test set, thereby providing inaccurate generalizations.
There are three main methods to avoid overfitting:
1- Keep the model simpler: reduce variance by taking into account fewer variables and parameters, thereby removing some of the noise in the training data.
2- Use cross-validation techniques such as k-folds cross-validation.
3- Use regularization techniques such as LASSO that penalize certain model parameters if they’re likely to cause overfitting.

How Will You Know Which Machine Learning Algorithm to Choose for Your Classification Problem?

While there is no fixed rule to choose an algorithm for a classification problem, you can follow these guidelines:
● If accuracy is a concern, test different algorithms and cross-validate them
● If the training dataset is small, use models that have low variance and high bias
● If the training dataset is large, use models that have high variance and little bias

Why is Area Under ROC Curve (AUROC) better than raw accuracy as an out-of-sample evaluation metric?

AUROC is robust to class imbalance, unlike raw accuracy.
For example, if you want to detect a type of cancer that’s prevalent in only 1% of the population, you can build a model that achieves 99% accuracy by simply classifying everyone as cancer-free.


What are the advantages and disadvantages of neural networks?

Advantages: Neural networks (specifically deep NNs) have led to performance breakthroughs for unstructured datasets such as images, audio, and video. Their incredible flexibility allows them to learn patterns that no other ML algorithm can learn.
Disadvantages: However, they require a large amount of training data to converge. It’s also difficult to pick the right architecture, and the internal “hidden” layers are incomprehensible.

Define Precision and Recall.

Precision
● Precision is the ratio of several events you can correctly recall to the total number of events you recall (mix of correct and wrong recalls).
● Precision = (True Positive) / (True Positive + False Positive)
Recall
● Recall is the ratio of the number of events you can correctly recall to the total number of events.
● Recall = (True Positive) / (True Positive + False Negative)
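Both formulas can be checked with a small pure-Python sketch over hypothetical true labels and predictions:

```python
# Hypothetical binary labels and model predictions
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0, 1, 0]

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)  # 3
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # 1
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # 1

precision = tp / (tp + fp)   # 3 / (3 + 1) = 0.75
recall = tp / (tp + fn)      # 3 / (3 + 1) = 0.75
print(precision, recall)
```

Intuitively, precision asks "of everything I flagged as positive, how much was right?", while recall asks "of everything that was actually positive, how much did I find?".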

Model Evaluation with test data set


What Is Decision Tree Classification?

A decision tree builds classification (or regression) models as a tree structure, with datasets broken up into ever-smaller subsets while developing the decision tree, literally in a tree-like way with branches and nodes. Decision trees can handle both categorical and numerical data.

What Is Pruning in Decision Trees, and How Is It Done?

Pruning is a technique in machine learning that reduces the size of decision trees. It reduces the complexity of the final classifier, and hence improves predictive accuracy by the reduction of overfitting.
Pruning can occur in:
● Top-down fashion. It will traverse nodes and trim subtrees starting at the root
● Bottom-up fashion. It will begin at the leaf nodes
There is a popular pruning algorithm called reduced error pruning, in which:
● Starting at the leaves, each node is replaced with its most popular class
● If the prediction accuracy is not affected, the change is kept
● There is an advantage of simplicity and speed

What Is a Recommendation System?

Anyone who has used Spotify or shopped at Amazon will recognize a recommendation system:
It’s an information filtering system that predicts what a user might want to hear or see based on choice patterns provided by the user.

What Is Kernel SVM?

Kernel SVM is the abbreviated version of the kernel support vector machine. Kernel methods are a class of algorithms for pattern analysis, and the most common one is the kernel SVM.

What Are Some Methods of Reducing Dimensionality?

You can reduce dimensionality by combining features with feature engineering, removing collinear features, or using algorithmic dimensionality reduction.
Now that you have gone through these machine learning interview questions, you should have an idea of your strengths and weaknesses in this domain.

 

How is KNN different from k-means clustering?

K-Nearest Neighbors is a supervised classification algorithm, while k-means clustering is an unsupervised clustering algorithm. While the mechanisms may seem similar at first, what this really means is that in order for K-Nearest Neighbors to work, you need labeled data you want to classify an unlabeled point into (thus the nearest neighbor part). K-means clustering requires only a set of unlabeled points and a number of clusters k: the algorithm will take unlabeled points and gradually learn how to cluster them into groups by computing the mean distance between the points.
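The contrast can be made concrete with toy NumPy implementations (illustrative sketches, not production code): k-NN needs the labels `y_train`, while k-means needs only the points and k.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3):
    """Supervised: label a new point by majority vote of its k nearest labeled neighbors."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    return np.bincount(nearest).argmax()

def kmeans(X, k=2, iters=10, seed=0):
    """Unsupervised: group unlabeled points around k learned centroids."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid, then move centroids to cluster means
        labels = np.argmin(np.linalg.norm(X[:, None] - centroids, axis=2), axis=1)
        centroids = np.array([X[labels == j].mean(axis=0) if (labels == j).any()
                              else centroids[j] for j in range(k)])
    return labels, centroids
```

On two well-separated blobs, k-means recovers the grouping without ever seeing a label, while k-NN classifies a new point using the labels.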

What are difference between Data Mining and Machine learning?

Machine learning relates to the study, design, and development of the algorithms that give computers the capability to learn without being explicitly programmed. Data mining can be defined as the process of extracting knowledge or unknown interesting patterns from unstructured data. Machine learning algorithms are often used during this process.

Machine Learning For Dummies

Machine Learning For Dummies  on iOs

Machine Learning For Dummies on Windows

Machine Learning For Dummies Web/Android 

#MachineLearning #AI #ArtificialIntelligence #ML #MachineLearningForDummies #MLOPS #NLP #ComputerVision #AWSMachineLEarning #AzureAI #GCPML

What is “Naive” in a Naive Bayes?

Reference: Naive Bayes Classifier on Wikipedia

Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes' theorem with the "naive" assumption of conditional independence between every pair of features given the value of the class variable. Bayes' theorem states the following relationship, given class variable y and dependent feature vector x1 through xn:

P(y | x1, …, xn) = P(y) P(x1, …, xn | y) / P(x1, …, xn), which under the naive independence assumption simplifies to P(y | x1, …, xn) ∝ P(y) Π P(xi | y).

What is PCA (Principal Component Analysis)? When do you use it?

Reference: PCA on wikipedia

Principal component analysis (PCA) is a statistical method used in Machine Learning. It projects data from a higher-dimensional space into a lower-dimensional space while maximizing the variance captured by each new dimension.

The process works as follows. We define a matrix A with n rows (the single observations of a dataset – in a tabular format, each single row) and p columns, our features. For this matrix we construct a variable space with as many dimensions as there are features. Each feature represents one coordinate axis. For each feature, the length has been standardized according to a scaling criterion, normally by scaling to unit variance. It is essential to scale the features to a common scale, otherwise the features with a greater magnitude will weigh more in determining the principal components. Once all the observations are plotted and the mean of each variable is computed, that mean will be represented by a point in the center of our plot (the center of gravity). Then, we subtract the mean from each observation, shifting the coordinate system so its center is at the origin. The resulting best-fitting line is the line that best accounts for the shape of the point swarm. It represents the maximum variance direction in the data. Each observation may be projected onto this line in order to get a coordinate value along the PC-line. This value is known as a score. The next best-fitting line can be similarly chosen from directions perpendicular to the first.
Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components.

PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations.

PCA is a technique that is used for reducing the dimensionality of a dataset while still preserving as much of the variance as possible. It is commonly used in machine learning and data science, as it can help to improve the performance of models by making the data easier to work with. In order to perform PCA on a dataset, there are a few pre-processing steps that need to be undertaken.

  • First, any features that are strongly correlated with each other should be removed, as PCA will not be effective in reducing the dimensionality of the data if there are strong correlations present.
  • Next, any features that contain missing values should be imputed, as PCA cannot be performed on data that contains missing values.
  • Finally, the data should be scaled so that all features are on the same scale; this is necessary because PCA is based on the variance of the data, and if the scales of the features are different then PCA will not be able to accurately identify which features are most important in terms of variance.
  • Once these pre-processing steps have been completed, PCA can be performed on the dataset.

Principal component analysis (PCA) is a statistical technique that is used to reduce the dimensionality of a dataset. PCA is often used as a pre-processing step in machine learning and data science, as it can help to improve the performance of models. In order to perform PCA on a dataset, the data must first be scaled and centered. Scaling ensures that all of the features are on the same scale, which is important for PCA. Centering means that the mean of each feature is zero. This is also important for PCA, as PCA is sensitive to changes in the mean of the data. Once the data has been scaled and centered, PCA can be performed by computing the eigenvectors and eigenvalues of the covariance matrix. These eigenvectors and eigenvalues can then be used to transform the data into a lower-dimensional space.
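The covariance-matrix route just described can be sketched in a few lines of NumPy (an illustrative implementation, not scikit-learn's):

```python
import numpy as np

def pca(X, n_components):
    """PCA via the covariance matrix: center/scale, eigendecompose, project."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)    # center and scale each feature
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigh: covariance matrices are symmetric
    order = np.argsort(eigvals)[::-1]           # sort components by explained variance
    components = eigvecs[:, order[:n_components]]
    return X @ components                       # transformed, lower-dimensional data
```

The first returned column is the direction of maximum variance, the second the best direction perpendicular to it, and so on.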

SVM (Support Vector Machine)  algorithm

Reference: SVM on wikipedia

Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a new data point will be in. In the case of support-vector machines, a data point is viewed as a p-dimensional vector (a list of p numbers), and we want to know whether we can separate such points with a (p − 1)-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice as the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So, we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane and the linear classifier it defines is known as a maximum-margin classifier; or equivalently, the perceptron of optimal stability. In the reference figure, the best hyperplane that divides the data is H3.

  • SVMs are helpful in text and hypertext categorization, as their application can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings.
  • Some methods for shallow semantic parsing are based on support vector machines.
  • Classification of images can also be performed using SVMs. Experimental results show that SVMs achieve significantly higher search accuracy than traditional query refinement schemes after just three to four rounds of relevance feedback.
  • Classification of satellite data like SAR data using supervised SVM.
  • Hand-written characters can be recognized using SVM.

What are the support vectors in SVM? 

In the diagram, we see that the sketched lines mark the distance from the classifier (the hyperplane) to the closest data points, called the support vectors (darkened data points). The distance between the two thin lines is called the margin.

To extend SVM to cases in which the data are not linearly separable, we introduce the hinge loss function, max(0, 1 − yi(w·xi − b)). This function is zero if xi lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin.
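That hinge loss translates directly to NumPy (the weights and sample points in the usage example are made-up values, with labels y in {−1, +1}):

```python
import numpy as np

def hinge_loss(w, b, X, y):
    """Per-sample hinge loss max(0, 1 - y_i(w·x_i - b)); zero on the correct side of the margin."""
    return np.maximum(0.0, 1.0 - y * (X @ w - b))
```

A point well inside the correct side of the margin contributes zero loss; points inside or beyond the margin contribute proportionally to how far they violate it.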

What are the different kernels in SVM?

There are four types of kernels in SVM.
1. Linear kernel
2. Polynomial kernel
3. Radial basis kernel
4. Sigmoid kernel
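The four kernels can be written directly as functions of two feature vectors (gamma, r, and the degree d below are hypothetical hyperparameter choices):

```python
import numpy as np

def linear_kernel(x, z):
    return x @ z

def polynomial_kernel(x, z, d=3, r=1.0):
    return (x @ z + r) ** d

def rbf_kernel(x, z, gamma=0.5):
    # radial basis: similarity decays with squared distance
    return np.exp(-gamma * np.sum((x - z) ** 2))

def sigmoid_kernel(x, z, gamma=0.5, r=0.0):
    return np.tanh(gamma * (x @ z) + r)
```

Note that the RBF kernel of any vector with itself is always 1, its maximum similarity.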

What are the most known ensemble algorithms? 

Reference: Ensemble Algorithms

The most popular ones are AdaBoost, Random Forest, and eXtreme Gradient Boosting (XGBoost).

AdaBoost is best used in a dataset with low noise, when computational complexity or timeliness of results is not a main concern and when there are not enough resources for broader hyperparameter tuning due to lack of time and knowledge of the user.

Random forests should not be used when dealing with time series data or any other data where look-ahead bias should be avoided, and the order and continuity of the samples need to be ensured. This algorithm can handle noise relatively well, but more knowledge from the user is required to adequately tune the algorithm compared to AdaBoost.

The main advantage of XGBoost is its lightning speed compared to other algorithms, such as AdaBoost, and its regularization parameter that successfully reduces variance. Beyond the regularization parameter, this algorithm also leverages a learning rate (shrinkage) and subsamples from the features like random forests, which increases its ability to generalize even further. However, XGBoost is more difficult to understand, visualize, and tune compared to AdaBoost and random forests. There is a multitude of hyperparameters that can be tuned to increase performance.
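The boosting idea behind AdaBoost can be sketched from scratch with one-feature threshold stumps (a toy NumPy version, not the real library; labels y in {−1, +1}):

```python
import numpy as np

def adaboost_stumps(X, y, rounds=10):
    """Minimal AdaBoost sketch: reweight samples each round, favoring hard cases."""
    n = len(y)
    w = np.full(n, 1 / n)                      # start with uniform sample weights
    stumps = []
    for _ in range(rounds):
        best = None
        # exhaustive search for the stump with the lowest weighted error
        for j in range(X.shape[1]):
            for t in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] >= t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, t, sign)
        err, j, t, sign = best
        err = max(err, 1e-10)                  # avoid division by zero on a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)  # low-error stumps get more say
        pred = sign * np.where(X[:, j] >= t, 1, -1)
        w *= np.exp(-alpha * y * pred)         # upweight misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, t, sign))
    return stumps

def predict(stumps, X):
    score = sum(a * s * np.where(X[:, j] >= t, 1, -1) for a, j, t, s in stumps)
    return np.sign(score)
```

Random forests and XGBoost differ in how the trees are built and combined (bagging vs. regularized gradient boosting), but this weighted-vote structure is the common ensemble core.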

What are Artificial Neural Networks?

Artificial Neural Networks are a specific set of algorithms that have revolutionized machine learning. They are inspired by biological neural networks. Neural networks can adapt to changing input, so the network generates the best possible result without needing to redesign the output criteria.

Artificial Neural Networks work on the same principle as a biological neural network. They consist of inputs which get processed with weighted sums and a bias, with the help of activation functions.

How Are Weights Initialized in a Network?

There are two methods here: we can either initialize the weights to zero or assign them randomly.

Initializing all weights to 0: This makes your model similar to a linear model. All the neurons and every layer perform the same operation, giving the same output and making the deep net useless.

Initializing all weights randomly: Here, the weights are assigned randomly by initializing them very close to 0. It gives better accuracy to the model since every neuron performs different computations. This is the most commonly used method.
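For illustration, the two schemes in NumPy (the layer shape and seed are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(42)
n_in, n_out = 4, 3

# Zero initialization: every neuron starts identical, so they all compute the
# same thing and receive the same gradient (the "linear model" failure mode).
w_zero = np.zeros((n_in, n_out))

# Small random initialization close to 0 breaks that symmetry,
# letting each neuron learn a different feature.
w_rand = rng.normal(loc=0.0, scale=0.01, size=(n_in, n_out))
```
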

What Is the Cost Function? 

Also referred to as "loss" or "error," the cost function is a measure of how good your model's performance is. It's used to compute the error of the output layer during backpropagation. We push that error backwards through the neural network and use it to update the weights during training.
The best-known cost function is the mean squared error.
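Mean squared error is a one-liner in NumPy:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error: average squared difference between targets and predictions."""
    return np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)
```
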

What Are Hyperparameters?

With neural networks, you’re usually working with hyperparameters once the data is formatted correctly.
A hyperparameter is a parameter whose value is set before the learning process begins. It determines how a network is trained and the structure of the network (such as the number of hidden units, the learning rate, epochs, batches, etc.).

What Are the Different Layers on CNN?

Reference: Layers of CNN 

Convolutional neural networks (CNNs) are regularized versions of the multilayer perceptron (MLP). They were developed based on the working of the neurons of the animal visual cortex.

The objective of using the CNN:

The idea is that you give the computer this array of numbers and it will output numbers that describe the probability of the image being a certain class (.80 for a cat, .15 for a dog, .05 for a bird, etc.). It works similar to how our brain works. When we look at a picture of a dog, we can classify it as such if the picture has identifiable features such as paws or 4 legs. In a similar way, the computer is able to perform image classification by looking for low-level features such as edges and curves and then building up to more abstract concepts through a series of convolutional layers. The computer uses low-level features obtained at the initial levels to generate high-level features such as paws or eyes to identify the object.

There are four layers in CNN:
1. Convolutional Layer – the layer that performs a convolutional operation, creating several smaller picture windows to go over the data.
2. Activation Layer (ReLU Layer) – it brings non-linearity to the network and converts all the negative pixels to zero. The output is a rectified feature map. It follows each convolutional layer.
3. Pooling Layer – pooling is a down-sampling operation that reduces the dimensionality of the feature map. The stride determines how far the window slides each step, and max pooling takes the maximum of each n x n window.
4. Fully Connected Layer – this layer recognizes and classifies the objects in the image.

What Is Pooling on CNN, and How Does It Work?

Pooling is used to reduce the spatial dimensions of a CNN. It performs down-sampling operations to reduce the dimensionality and creates a pooled feature map by sliding a filter matrix over the input matrix.
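A minimal NumPy sketch of max pooling, the most common variant (illustrative only, single-channel input):

```python
import numpy as np

def max_pool(feature_map, size=2, stride=2):
    """Slide a size-by-size window with the given stride; keep the max of each window."""
    h, w = feature_map.shape
    out_h = (h - size) // stride + 1
    out_w = (w - size) // stride + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            window = feature_map[i*stride:i*stride+size, j*stride:j*stride+size]
            out[i, j] = window.max()
    return out
```

A 4x4 feature map pooled with a 2x2 window and stride 2 becomes a 2x2 map, keeping only the strongest activation in each region.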

What are Recurrent Neural Networks (RNNs)? 

Reference: RNNs

RNNs are a type of artificial neural network designed to recognize patterns in sequences of data, such as time series from stock markets or government agencies.

Recurrent Neural Networks (RNNs) add an interesting twist to basic neural networks. A vanilla neural network takes in a fixed size vector as input which limits its usage in situations that involve a ‘series’ type input with no predetermined size.

RNNs are designed to take a series of inputs with no predetermined limit on size. One could ask: what's the big deal, I can call a regular NN repeatedly too?

Sure can, but the ‘series’ part of the input means something. A single input item from the series is related to others and likely has an influence on its neighbors. Otherwise it’s just “many” inputs, not a “series” input (duh!).
Recurrent Neural Network remembers the past and its decisions are influenced by what it has learnt from the past. Note: Basic feed forward networks “remember” things too, but they remember things they learnt during training. For example, an image classifier learns what a “1” looks like during training and then uses that knowledge to classify things in production.
While RNNs learn similarly while training, in addition, they remember things learnt from prior input(s) while generating output(s). RNNs can take one or more input vectors and produce one or more output vectors and the output(s) are influenced not just by weights applied on inputs like a regular NN, but also by a “hidden” state vector representing the context based on prior input(s)/output(s). So, the same input could produce a different output depending on previous inputs in the series.

In summary, in a vanilla neural network, a fixed size input vector is transformed into a fixed size output vector. Such a network becomes “recurrent” when you repeatedly apply the transformations to a series of given input and produce a series of output vectors. There is no pre-set limitation to the size of the vector. And, in addition to generating the output which is a function of the input and hidden state, we update the hidden state itself based on the input and use it in processing the next input.
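The hidden-state update described above fits in one NumPy function (toy weights, tanh chosen as a typical nonlinearity):

```python
import numpy as np

def rnn_step(x, h_prev, Wx, Wh, b):
    """One recurrent step: the new hidden state depends on the current input AND
    the previous hidden state, which is what gives the network its memory."""
    return np.tanh(x @ Wx + h_prev @ Wh + b)
```

With this, the same input x produces different outputs depending on the history carried in h_prev, which is exactly the "same input, different output" behavior described above.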

What is the role of the Activation Function?

The activation function is used to introduce non-linearity into the neural network, helping it learn more complex functions. Without it, the neural network would only be able to learn linear functions, i.e. linear combinations of its input data. An activation function is a function in an artificial neuron that delivers an output based on inputs.
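For example, two common activation functions in NumPy:

```python
import numpy as np

def relu(x):
    """Rectified linear unit: zero for negative inputs, identity for positive."""
    return np.maximum(0, x)

def sigmoid(x):
    """Squashes any real input into the interval (0, 1)."""
    return 1 / (1 + np.exp(-x))
```

Both bend the weighted sum non-linearly, which is what lets stacked layers represent more than a single linear map.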

Machine Learning libraries for various purposes

What is an Auto-Encoder?

Reference: Auto-Encoder

Auto-encoders are simple learning networks that aim to transform inputs into outputs with the minimum possible error. This means that we want the output to be as close to input as possible. We add a couple of layers between the input and the output, and the sizes of these layers are smaller than the input layer. The auto-encoder receives unlabeled input which is then encoded to reconstruct the input. 

An autoencoder is a type of artificial neural network used to learn efficient data coding in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input, hence its name. Several variants exist to the basic model, with the aim of forcing the learned representations of the input to assume useful properties.
Autoencoders are effectively used for solving many applied problems, from face recognition to acquiring the semantic meaning of words.

What is a Boltzmann Machine?

Boltzmann machines have a simple learning algorithm that allows them to discover interesting features that represent complex regularities in the training data. The Boltzmann machine is basically used to optimize the weights and the quantity for the given problem. The learning algorithm is very slow in networks with many layers of feature detectors. The "Restricted Boltzmann Machine" algorithm has a single layer of feature detectors, which makes it faster than the rest.

What Is Dropout and Batch Normalization?

Dropout is a technique of randomly dropping out hidden and visible nodes of a network to prevent overfitting (typically dropping 20 percent of the nodes). It roughly doubles the number of iterations needed for the network to converge, but it improves generalization.

Batch normalization is the technique to improve the performance and stability of neural networks by normalizing the inputs in every layer so that they have a mean output activation of zero and a standard deviation of one.
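Both techniques are easy to sketch in NumPy (toy versions: real frameworks also learn scale/shift parameters for batch norm and track running statistics for inference):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(a, p=0.2, train=True):
    """Randomly zero a fraction p of activations during training (inverted dropout,
    so the surviving activations are rescaled and no change is needed at test time)."""
    if not train:
        return a
    mask = rng.random(a.shape) >= p
    return a * mask / (1 - p)

def batch_norm(x, eps=1e-5):
    """Normalize each feature to zero mean and unit standard deviation across the batch."""
    return (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
```
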

Why Is TensorFlow the Most Preferred Library in Deep Learning?

TensorFlow provides both C++ and Python APIs, making it easier to work on and has a faster compilation time compared to other Deep Learning libraries like Keras and PyTorch. TensorFlow supports both CPU and GPU computing devices.

What is Tensor in TensorFlow?

A tensor is a mathematical object represented as an array of higher dimensions; think of an n-dimensional matrix. These arrays of data with different dimensions and ranks, fed as input to the neural network, are called "tensors."

What is the Computational Graph?

Everything in TensorFlow is based on creating a computational graph: a network of nodes, where each node performs an operation. Nodes represent mathematical operations, and edges represent tensors. Since data flows in the form of a graph, it is also called a "DataFlow Graph."

How is logistic regression done? 

Logistic regression measures the relationship between the dependent variable (our label of what we want to predict) and one or more independent variables (our features) by estimating probability using its underlying logistic function (sigmoid).
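That sigmoid-based estimation can be sketched as a from-scratch gradient-descent fit (an illustrative implementation, not a library call; the data in the usage example is made up):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def logistic_regression(X, y, lr=0.1, epochs=2000):
    """Fit weights and bias by gradient descent on the log-loss."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)               # predicted probabilities
        w -= lr * X.T @ (p - y) / len(y)     # gradient of the log-loss w.r.t. w
        b -= lr * np.mean(p - y)             # gradient w.r.t. b
    return w, b
```

On a tiny separable dataset the fitted model pushes probabilities below 0.5 for the negative class and above 0.5 for the positive class.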

Explain the steps in making a decision tree. 

1. Take the entire data set as input
2. Calculate entropy of the target variable, as well as the predictor attributes
3. Calculate your information gain of all attributes (we gain information on sorting different objects from each other)
4. Choose the attribute with the highest information gain as the root node
5. Repeat the same procedure on every branch until the decision node of each branch is finalized
For example, let’s say you want to build a decision tree to decide whether you should accept or decline a job offer. The decision tree for this case is as shown:

It is clear from the decision tree that an offer is accepted if:
• Salary is greater than $50,000
• The commute is less than an hour
• Coffee is offered
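The entropy and information gain used in steps 2 and 3 above can be computed as follows (a NumPy sketch with hypothetical helper names):

```python
import numpy as np

def entropy(y):
    """Shannon entropy of a label array (step 2)."""
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(y, mask):
    """Entropy reduction from splitting y by a boolean mask (step 3):
    parent entropy minus the size-weighted entropy of the two children."""
    n = len(y)
    left, right = y[mask], y[~mask]
    child = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(y) - child
```

A split that produces two pure children yields the maximum possible gain, which is why it would be chosen for the root node in step 4.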

How do you build a random forest model?

A random forest is built up of a number of decision trees. If you split the data into different packages and make a decision tree in each of the different groups of data, the random forest brings all those trees together.

Steps to build a random forest model:

1. Randomly select k features from a total of m features, where k << m
2. Among the k features, calculate the node d using the best split point
3. Split the node into daughter nodes using the best split
4. Repeat steps two and three until leaf nodes are finalized
5. Build the forest by repeating steps one to four n times to create n trees

Differentiate between univariate, bivariate, and multivariate analysis. 

Univariate data contains only one variable. The purpose of the univariate analysis is to describe the data and find patterns that exist within it.

The patterns can be studied by drawing conclusions using mean, median, mode, dispersion or range, minimum, maximum, etc.

Bivariate data involves two different variables. The analysis of this type of data deals with causes and relationships and the analysis is done to determine the relationship between the two variables.

Here, it is visible from the table that temperature and sales are directly proportional to each other. The hotter the temperature, the better the sales.

Data involving three or more variables is categorized as multivariate. It is similar to bivariate data but contains more than one dependent variable.

Example: data for house price prediction
The patterns can be studied by drawing conclusions using mean, median, and mode, dispersion or range, minimum, maximum, etc. You can start describing the data and using it to guess what the price of the house will be.

What are the feature selection methods used to select the right variables?

There are two main methods for feature selection.
Filter Methods
This involves:
• Linear discriminant analysis
• ANOVA
• Chi-Square
The best analogy for selecting features is “bad data in, bad answer out.” When we’re limiting or selecting the features, it’s all about cleaning up the data coming in.

Wrapper Methods
This involves:
• Forward Selection: We test one feature at a time and keep adding them until we get a good fit
• Backward Selection: We test all the features and start removing them to see what works better
• Recursive Feature Elimination: Recursively looks through all the different features and how they pair together

Wrapper methods are very labor-intensive, and high-end computers are needed if a lot of data analysis is performed with the wrapper method.

You are given a data set consisting of variables with more than 30 percent missing values. How will you deal with them? 

If the data set is large, we can simply remove the rows with missing values; it is the quickest way, and we then use the rest of the data to build the model.

For smaller data sets, we can impute missing values with the mean or median of the rest of the data using a pandas DataFrame in Python, for example: df.fillna(df.mean())

Another option for imputation is using KNN for numeric or classification values (KNN uses the k closest values to impute the missing value).
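For example, mean imputation with a pandas DataFrame (the data and column names here are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 31, 29],
                   "income": [40, 55, np.nan, 48]})

# df.mean() gives per-column means; fillna fills each column's NaNs with its own mean
filled = df.fillna(df.mean())
```
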

How will you calculate the Euclidean distance in Python?

Given two points:

plot1 = [1, 3]

plot2 = [2, 5]

The Euclidean distance can be calculated as follows:

from math import sqrt

euclidean_distance = sqrt((plot1[0] - plot2[0])**2 + (plot1[1] - plot2[1])**2)

What are dimensionality reduction and its benefits? 

Dimensionality reduction refers to the process of converting a data set with vast dimensions into data with fewer dimensions (fields) to convey similar information concisely.

This reduction helps in compressing data and reducing storage space. It also reduces computation time as fewer dimensions lead to less computing. It removes redundant features; for example, there’s no point in storing a value in two different units (meters and inches).

How should you maintain a deployed model?

The steps to maintain a deployed model are (CREM):

1. Monitor: constant monitoring of all models is needed to determine their performance accuracy.
When you change something, you want to figure out how your changes are going to affect things.
This needs to be monitored to ensure it’s doing what it’s supposed to do.
2. Evaluate: evaluation metrics of the current model are calculated to determine if a new algorithm is needed.
3. Compare: the new models are compared to each other to determine which model performs the best.
4. Rebuild: the best performing model is re-built on the current state of data.

How can time-series data be declared stationary?

  1. The mean of the series should not be a function of time.

  2. The variance of the series should not be a function of time. This property is known as homoscedasticity.

  3. The covariance of the i-th term and the (i+m)-th term should not be a function of time.
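The first two conditions can be checked roughly by comparing statistics across slices of the series (a heuristic sketch only; in practice a formal test such as the augmented Dickey-Fuller test is used):

```python
import numpy as np

def looks_stationary(series, n_chunks=4, tol=0.5):
    """Rough check: split the series into chunks and compare their means and
    standard deviations. Large drift across chunks suggests non-stationarity."""
    chunks = np.array_split(np.asarray(series, dtype=float), n_chunks)
    means = [c.mean() for c in chunks]
    stds = [c.std() for c in chunks]
    return (max(means) - min(means) < tol) and (max(stds) - min(stds) < tol)
```

White noise passes the check, while a trending series fails because its chunk means drift with time.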

‘People who bought this also bought…’ recommendations seen on Amazon are a result of which algorithm?

The recommendation engine is accomplished with collaborative filtering. Collaborative filtering explains the behavior of other users and their purchase history in terms of ratings, selection, etc.
The engine makes predictions on what might interest a person based on the preferences of other users. In this algorithm, item features are unknown.
For example, a sales page shows that a certain number of people buy a new phone and also buy tempered glass at the same time. Next time, when a person buys a phone, he or she may see a recommendation to buy tempered glass as well.

What is a Generative Adversarial Network?

Suppose there is a wine shop purchasing wine from dealers, which they resell later. But some dealers sell fake wine. In this case, the shop owner should be able to distinguish between fake and authentic wine. The forger will try different techniques to sell fake wine and make sure specific techniques go past the shop owner’s check. The shop owner would probably get some feedback from wine experts that some of the wine is not original. The owner would have to improve how he determines whether a wine is fake or authentic.
The forger’s goal is to create wines that are indistinguishable from the authentic ones while the shop owner intends to tell if the wine is real or not accurately.

• There is a noise vector coming into the forger who is generating fake wine.
• Here the forger acts as a Generator.
• The shop owner acts as a Discriminator.
• The Discriminator gets two inputs; one is the fake wine, while the other is the real authentic wine.
The shop owner has to figure out whether it is real or fake.

So, there are two primary components of Generative Adversarial Network (GAN) named:
1. Generator
2. Discriminator

The generator is a CNN that keeps producing images that are progressively closer in appearance to the real images, while the discriminator tries to determine the difference between real and fake images. The ultimate aim is to train the generator until the discriminator can no longer reliably tell real images from fake ones.

You are given a dataset on cancer detection. You have built a classification model and achieved an accuracy of 96 percent. Why shouldn’t you be happy with your model performance? What can you do about it?

Cancer detection results in imbalanced data. In an imbalanced dataset, accuracy should not be used as a measure of performance. It is important to focus on the remaining four percent, which represents the patients who were wrongly diagnosed. Early diagnosis is crucial when it comes to cancer detection and can greatly improve a patient's prognosis.

Hence, to evaluate model performance, we should use Sensitivity (True Positive Rate), Specificity (True Negative Rate), and the F-measure to determine the class-wise performance of the classifier.

We want to predict the probability of death from heart disease based on three risk factors: age, gender, and blood cholesterol level. What is the most appropriate algorithm for this case?

The most appropriate algorithm for this case is logistic regression.

After studying the behavior of a population, you have identified four specific individual types that are valuable to your study. You would like to find all users who are most similar to each individual type. Which algorithm is most appropriate for this study? 

As we are looking for grouping people together specifically by four different similarities, it indicates the value of k. Therefore, K-means clustering is the most appropriate algorithm for this study.

You have run the association rules algorithm on your dataset, and the two rules {banana, apple} => {grape} and {apple, orange} => {grape} have been found to be relevant. What else must be true? 

{grape, apple} must be a frequent itemset.

Your organization has a website where visitors randomly receive one of two coupons. It is also possible that visitors to the website will not receive a coupon. You have been asked to determine if offering a coupon to website visitors has any impact on their purchase decisions. Which analysis method should you use?

One-way ANOVA: in statistics, one-way analysis of variance is a technique used to compare the means of two or more samples. It requires numerical response data, the “Y” (one variable), and categorical input data, the “X” (always one variable), hence “one-way”.
The ANOVA tests the null hypothesis that samples in all groups are drawn from populations with the same mean values. To do this, two estimates are made of the population variance. The ANOVA produces an F-statistic: the ratio of the variance calculated among the group means to the variance within the samples. If the group means are drawn from populations with the same mean values, the variance between the group means should be lower than the variance within the samples, following the central limit theorem. A higher ratio therefore implies that the samples were drawn from populations with different mean values.
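A hedged sketch with SciPy's `f_oneway` (the group means, spreads, and sample sizes below are made-up illustration data for the three visitor groups):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(2)
# Hypothetical purchase amounts for the three visitor groups
coupon_a = rng.normal(55, 10, 100)
coupon_b = rng.normal(50, 10, 100)
no_coupon = rng.normal(45, 10, 100)

# One-way ANOVA: does the coupon group affect the mean purchase amount?
f_stat, p_value = f_oneway(coupon_a, coupon_b, no_coupon)
```

A small p-value would suggest the group means differ, i.e. the coupon has an impact.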

What are the feature vectors?

A feature vector is an n-dimensional vector of numerical features that represent an object. In machine learning, feature vectors are used to represent numeric or symbolic characteristics (called features) of an object in a mathematical way that’s easy to analyze.

What is root cause analysis?

Root cause analysis was initially developed to analyze industrial accidents but is now widely used in other areas. It is a problem-solving technique used for isolating the root causes of faults or problems. A factor is called a root cause if its removal from the problem-fault sequence prevents the final undesirable event from recurring.

Do gradient descent methods always converge to similar points?

They do not, because in some cases, they reach a local minimum or a local optimum point. You would not reach the global optimum point. This is governed by the data and the starting conditions.
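A tiny demonstration that the starting conditions govern which minimum gradient descent reaches, using an arbitrary one-dimensional function with two local minima (the function, learning rate, and step count are illustrative choices):

```python
def grad(x):
    # derivative of f(x) = x**4 - 3*x**2 + x, which has two distinct local minima
    return 4 * x**3 - 6 * x + 1

def descend(x, lr=0.01, steps=2000):
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = descend(-2.0)   # settles in the minimum near x ≈ -1.3
right = descend(2.0)   # settles in the different minimum near x ≈ 1.1
```

Same algorithm, same function, different starting points: the two runs converge to different points.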

Machine Learning For Dummies  on iOs

Machine Learning For Dummies on Windows

Machine Learning For Dummies Web/Android 

#MachineLearning #AI #ArtificialIntelligence #ML #MachineLearningForDummies #MLOPS #NLP #ComputerVision #AWSMachineLEarning #AzureAI #GCPML

What are the different Deep Learning Frameworks?

PyTorch: PyTorch is an open source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab. It is free and open-source software released under the Modified BSD license.
TensorFlow: TensorFlow is a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks. Licensed by Apache License 2.0. Developed by Google Brain Team.
Microsoft Cognitive Toolkit: Microsoft Cognitive Toolkit describes neural networks as a series of computational steps via a directed graph.
Keras: Keras is an open-source neural-network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible. Licensed under the MIT license.

How Does an LSTM Network Work?

Reference: LSTM

Long Short-Term Memory (LSTM) is a special kind of recurrent neural network capable of learning long-term dependencies, remembering information for long periods as its default behavior. There are three steps in an LSTM network:
• Step 1: The network decides what to forget and what to remember.
• Step 2: It selectively updates cell state values.
• Step 3: The network decides what part of the current state makes it to the output.
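The three steps can be sketched as a single LSTM time step in NumPy (the weight shapes, the `W` dictionary, and the random initialization are illustrative assumptions, not a trained network):

```python
import numpy as np

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

def lstm_step(x, h, c, W):
    """One LSTM time step on hypothetical gate weight matrices W."""
    z = np.concatenate([h, x])
    f = sigmoid(W["f"] @ z)               # step 1: forget gate decides what to forget
    i = sigmoid(W["i"] @ z)               # step 1: input gate decides what to remember
    c = f * c + i * np.tanh(W["g"] @ z)   # step 2: selectively update the cell state
    o = sigmoid(W["o"] @ z)               # step 3: output gate picks what reaches the output
    h = o * np.tanh(c)
    return h, c

rng = np.random.default_rng(0)
n, d = 4, 3                               # hidden size, input size (arbitrary)
W = {k: rng.normal(size=(n, n + d)) for k in "figo"}
h, c = np.zeros(n), np.zeros(n)
h, c = lstm_step(rng.normal(size=d), h, c, W)
```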

What Is a Multi-layer Perceptron (MLP)?

Reference: MLP

Like other neural networks, an MLP has an input layer, one or more hidden layers, and an output layer. It has the same building blocks as a single-layer perceptron, extended with one or more hidden layers.

A perceptron is a single-layer neural network; a multi-layer perceptron is what we usually call a neural network.
A (single-layer) perceptron works as a linear binary classifier. Being a single-layer network, it can be trained without more advanced algorithms like backpropagation; instead it can be trained by “stepping towards” the error in steps whose size is set by a learning rate. When someone says perceptron, they usually mean the single-layer version.
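A minimal sketch of that “stepping towards the error” training rule, on the toy logical-AND problem (the data, learning rate, and epoch count are illustrative choices):

```python
import numpy as np

def train_perceptron(X, y, lr=0.1, epochs=20):
    """Single-layer perceptron: a linear binary classifier trained by
    stepping towards the error, no backpropagation needed."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            update = lr * (target - pred)  # step scaled by the learning rate
            w += update * xi
            b += update
    return w, b

# Toy linearly separable problem: logical AND
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
preds = (X @ w + b > 0).astype(int)
```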

 

Machine Learning Multi-Layer Perceptron

What is exploding gradients? 

https://machinelearningmastery.com/exploding-gradients-in-neural-networks/

While training an RNN, if you see exponentially growing (very large) error gradients which accumulate and result in very large updates to neural network model weights during training, they’re known as exploding gradients. At an extreme, the values of weights can become so large as to overflow and result in NaN values. The explosion occurs through exponential growth by repeatedly multiplying gradients through network layers that have values larger than 1.0.
This makes your model unstable and unable to learn from your training data.
There are some subtle signs that you may be suffering from exploding gradients during the training of your network, such as:
• The model is unable to get traction on your training data (e.g. poor loss).
• The model is unstable, resulting in large changes in loss from update to update.
• The model loss goes to NaN during training.
• The model weights quickly become very large during training.
• The error gradient values are consistently above 1.0 for each node and layer during training.

Solutions
1. Re-Design the Network Model:
a. In deep neural networks, exploding gradients may be addressed by redesigning the network to have fewer layers. There may also be some benefit in using a smaller batch size while training the network.
b. In RNNs, updating across fewer prior time steps during training, called truncated backpropagation through time, may reduce the exploding gradient problem.

2. Use Long Short-Term Memory Networks: In RNNs, exploding gradients can be reduced by using the Long Short-Term Memory (LSTM) memory units and perhaps related gated-type neuron structures. Adopting LSTM memory units is a new best practice for recurrent neural networks for sequence prediction.

3. Use Gradient Clipping: Exploding gradients can still occur in very deep Multilayer Perceptron networks with a large batch size and LSTMs with very long input sequence lengths. If exploding gradients are still occurring, you can check for and limit the size of gradients during the training of your network. This is called gradient clipping. Specifically, the values of the error gradient are checked against a threshold value and clipped or set to that threshold value if the error gradient exceeds the threshold.
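A sketch of clip-by-norm gradient clipping (the threshold value and the example gradient are arbitrary illustrations):

```python
import numpy as np

def clip_by_norm(grads, threshold=1.0):
    """Rescale the gradient vector if its L2 norm exceeds the threshold,
    preserving its direction."""
    norm = np.linalg.norm(grads)
    if norm > threshold:
        grads = grads * (threshold / norm)
    return grads

g = np.array([3.0, 4.0])        # L2 norm 5.0, exceeds the threshold
clipped = clip_by_norm(g, 1.0)  # rescaled to norm 1.0, same direction
```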

4. Use Weight Regularization: another approach, if exploding gradients are still occurring, is to check the size of network weights and apply a penalty to the network’s loss function for large weight values. This is called weight regularization, and often an L1 (absolute weights) or an L2 (squared weights) penalty can be used.

What is vanishing gradients? 

While training an RNN, the slope (gradient) can become too small, which makes training difficult. When the slope is too small, the problem is known as a vanishing gradient. It leads to long training times, poor performance, and low accuracy.
• Hyperbolic tangent and sigmoid/softmax activations suffer from vanishing gradients.
• RNNs suffer from vanishing gradients; LSTMs largely do not, which is one reason they are preferred for long sequences. As the error propagates back through earlier layers, the gradient gets smaller and smaller, so the weights of those layers are barely updated.

Solutions
1. Choose RELU
2. Use LSTM (for RNNs)
3. Use ResNet (Residual Network) → after some layers, add x again: F(x) → ⋯ → F(x) + x
4. Multi-level hierarchy: pre-train one layer at a time through unsupervised learning, then fine-tune via backpropagation
5. Gradient checking: debugging strategy used to numerically track and assess gradients during training.
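Gradient checking (solution 5) can be sketched with a central-difference estimate; the function and evaluation point below are arbitrary examples:

```python
def numerical_grad(f, x, eps=1e-6):
    """Central-difference estimate of df/dx, used to verify an analytic gradient."""
    return (f(x + eps) - f(x - eps)) / (2 * eps)

f = lambda x: x**2
analytic = 2 * 3.0                      # known analytic derivative of x**2 at x = 3
numeric = numerical_grad(f, 3.0)
assert abs(analytic - numeric) < 1e-4   # gradients agree: implementation is likely correct
```

In practice the same comparison is run over every parameter of the network against the gradients produced by backpropagation.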

What is Gradient Descent?

Let’s first explain what a gradient is. A gradient is a mathematical function. When calculated on a point of a function, it gives the hyperplane (or slope) of the directions in which the function increases more. The gradient vector can be interpreted as the “direction and rate of fastest increase”. If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction.
Further, the gradient is the zero vector at a point if and only if it is a stationary point (where the derivative vanishes).
In data science, it simply measures the change in all weights with regard to the change in error, since we take the partial derivative of the loss function with respect to each weight w.

Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function.

 

Machine Learning Gradient Descent

The goal of the gradient descent is to minimize a given function which, in our case, is the loss function of the neural network. To achieve this goal, it performs two steps iteratively.
1. Compute the slope (gradient) that is the first-order derivative of the function at the current point
2. Move in the opposite direction of the slope from the current point by the computed amount
So, the idea is to pass the training set through the hidden layers of the neural network and then update the parameters of the layers by computing the gradients using the training samples from the training dataset.
Think of it like this. Suppose a man is at the top of a valley and he wants to get to the bottom of the valley.
So, he goes down the slope. He decides his next position based on his current position and stops when he gets to the bottom of the valley, which was his goal.

• Gradient descent is a popular iterative optimization algorithm and the basis for many other optimization techniques; it tries to obtain minimal loss in a model by tuning the weights/parameters in the objective function.

• Types of Gradient Descent:

  1. Batch Gradient Descent
  2. Stochastic Gradient Descent
  3. Mini Batch Gradient Descent

• Steps to achieve minimal loss:

  1. The first stage in gradient descent is to pick a starting value (a starting point) for w1, which is set to 0 by many algorithms.
  2. The gradient descent algorithm then calculates the gradient of the loss curve at the starting point.
  3. The gradient always points in the direction of steepest increase in the loss function. The gradient descent algorithm takes a step in the direction of the negative gradient in order to reduce loss as quickly as possible.
  4. To determine the next point along the loss function curve, the gradient descent algorithm moves some fraction of the gradient’s magnitude away from the starting point, in the negative gradient direction.
  5. The gradient descent then repeats this process, edging ever closer to the minimum.
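The steps above can be sketched on a one-dimensional loss (the quadratic loss, learning rate, and step count are illustrative choices):

```python
def gradient_descent(grad, w=0.0, lr=0.1, steps=100):
    # Step 1: starting value w = 0
    for _ in range(steps):
        g = grad(w)     # Step 2: gradient of the loss at the current point
        w = w - lr * g  # Steps 3-4: move a fraction of the gradient in the negative direction
    return w            # Step 5: repeated until we edge close to the minimum

# Loss L(w) = (w - 3)**2 has its minimum at w = 3; its gradient is 2 * (w - 3)
w_min = gradient_descent(lambda w: 2 * (w - 3))
```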

What is Back Propagation and How Does it Work? 

Back propagation is a training algorithm used for neural networks. In this method, we update the weights of each layer recursively, starting from the last layer, with the formula:

w_new = w_old − η · ∂E/∂w (each weight is moved against the gradient of the error E with respect to that weight, scaled by the learning rate η)

It has the following steps:
• Forward Propagation of Training Data (initializing weights with random or pre-assigned values)
• Gradients are computed using output weights and target
• Back Propagate for computing gradients of error from output activation
• Update the Weights

What are the variants of Back Propagation? 

Reference: Variants of back propagation

  • Stochastic Gradient Descent: in batch gradient descent we consider all the examples for every step of gradient descent. But what if our dataset is huge? Deep learning models crave data: the more data, the better a model’s chances of being good. Suppose our dataset has 5 million examples; then, just to take one step, the model has to compute the gradients of all 5 million examples, which is not efficient. To tackle this problem, we have Stochastic Gradient Descent (SGD), where we consider just one example at a time to take a single step. We do the following steps in one epoch for SGD:
    1. Take an example
    2. Feed it to Neural Network
    3. Calculate its gradient
    4. Use the gradient we calculated in step 3 to update the weights
    5. Repeat steps 1–4 for all the examples in training dataset
    Since we are considering just one example at a time, the cost will fluctuate over the training examples and will not necessarily decrease at every step. But in the long run, you will see the cost decreasing with fluctuations. Because the cost fluctuates so much, it will never quite reach the minimum, but it will keep oscillating around it. SGD can be used for larger datasets: it converges faster when the dataset is large, as it updates the parameters more frequently.

 Stochastic Gradient Descent (SGD)

 


  • Batch Gradient Descent: all the training data is taken into consideration to take a single step. We take the average of the gradients of all the training examples and then use that mean gradient to update our parameters. So that’s just one step of gradient descent in one epoch. Batch Gradient Descent is great for convex or relatively smooth error manifolds. In this case, we move somewhat directly towards an optimum solution. The graph of cost vs epochs is also quite smooth because we are averaging over all the gradients of training data for a single step. The cost keeps on decreasing over the epochs.

 

Batch Gradient Descent

  • Mini-batch Gradient Descent: It’s one of the most popular optimization algorithms. It’s a variant of Stochastic Gradient Descent and here instead of single training example, mini batch of samples is used. Batch Gradient Descent can be used for smoother curves. SGD can be used when the dataset is large. Batch Gradient Descent converges directly to minima. SGD converges faster for larger datasets.
    But, since in SGD we use only one example at a time, we cannot use a vectorized implementation, which can slow down the computations. To tackle this problem, a mixture of batch gradient descent and SGD is used: we neither use the whole dataset at once nor a single example at a time. We use a batch of a fixed number of training examples, smaller than the actual dataset, called a mini-batch. Doing this gives us the advantages of both of the former variants. So, after creating the mini-batches of fixed size, we do the following steps in one epoch:
    1. Pick a mini-batch
    2. Feed it to Neural Network
    3. Calculate the mean gradient of the mini-batch
    4. Use the mean gradient we calculated in step 3 to update the weights
    5. Repeat steps 1–4 for the mini-batches we created
    Just like SGD, the average cost over the epochs in mini-batch gradient descent fluctuates because we are averaging a small number of examples at a time. So, when we are using the mini-batch gradient descent we are updating our parameters frequently as well as we can use vectorized implementation for faster computations.
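The mini-batch loop above can be sketched for a simple linear model in NumPy (the synthetic data, batch size, and learning rate are illustrative assumptions):

```python
import numpy as np

# Synthetic linear-regression data with known true weights
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 2))
true_w = np.array([2.0, -1.0])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(2)
lr, batch_size = 0.1, 32
for epoch in range(20):
    idx = rng.permutation(len(X))                       # shuffle each epoch
    for start in range(0, len(X), batch_size):
        b = idx[start:start + batch_size]               # step 1: pick a mini-batch
        grad = 2 * X[b].T @ (X[b] @ w - y[b]) / len(b)  # step 3: mean gradient of the batch
        w -= lr * grad                                  # step 4: update the weights
```

The vectorized batch computation (`X[b] @ w`) is exactly what single-example SGD gives up.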

While we continue to integrate ML systems in high-stakes environments such as medical settings, roads, command control centers, we need to ensure they do not cause the loss of life. How can you handle this?

By focusing on the following, which includes everything beyond just developing SOTA models, as well as the inclusion of key stakeholders.

🔹Robustness: Create models that are resilient to adversaries, unusual situations, and Black Swan events

🔹Monitoring: Detect malicious use, monitor predictions, and discover unexpected model functionality

🔹Alignment: Build models that represent and safely optimize hard-to-specify human values

🔹External Safety: Use ML to address risks to how ML systems are handled, such as cyber attacks

Unsolved Problems in ML Safety

Download Unsolved Problems in ML Safety Here

You are given a data set. The data set has missing values that spread along 1 standard deviation from the median. What percentage of data would remain unaffected? Why?

Since the data is spread around the median, let’s assume it’s a normal distribution (where mean, median, and mode coincide). In a normal distribution, ~68% of the data lies within 1 standard deviation of the mean, which leaves ~32% of the data outside that band. Therefore, ~32% of the data would remain unaffected by the missing values.
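The ~68% figure can be checked directly from the standard normal CDF, which for one standard deviation reduces to the error function:

```python
from math import erf, sqrt

# Fraction of a normal distribution within 1 standard deviation of the mean
within_1sd = erf(1 / sqrt(2))  # ≈ 0.6827
unaffected = 1 - within_1sd    # ≈ 0.3173, i.e. ~32% of the data
```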

What are PCA, KPCA, and ICA used for?

PCA (Principal Components Analysis), KPCA ( Kernel-based Principal Component Analysis) and ICA ( Independent Component Analysis) are important feature extraction techniques used for dimensionality reduction.

What is the bias-variance decomposition of classification error in the ensemble method?

The expected error of a learning algorithm can be decomposed into bias and variance. A bias term measures how closely the average classifier produced by the learning algorithm matches the target function. The variance term measures how much the learning algorithm’s prediction fluctuates for different training sets.

When is Ridge regression favorable over Lasso regression?

You can quote ISLR’s authors Hastie, Tibshirani who asserted that, in the presence of few variables with medium / large sized effect, use lasso regression. In presence of many variables with small/medium-sized effects, use ridge regression.
Conceptually, we can say, lasso regression (L1) does both variable selection and parameter shrinkage, whereas Ridge regression only does parameter shrinkage and end up including all the coefficients in the model. In the presence of correlated variables, ridge regression might be the preferred choice. Also, ridge regression works best in situations where the least square estimates have higher variance. Therefore, it depends on our model objective.

You’ve built a random forest model with 10000 trees. You got delighted after getting training error as 0.00. But, the validation error is 34.23. What is going on? Haven’t you trained your model perfectly?

The model has overfitted. A training error of 0.00 means the classifier has memorized the training data patterns, including noise that is not present in unseen data. Hence, when this classifier was run on an unseen sample, it couldn’t find those patterns and returned predictions with a much higher error. To avoid this situation, we should tune the forest’s hyperparameters, such as the number of trees and their depth, using cross-validation.

What is a convex hull?

In the case of linearly separable data, the convex hull represents the outer boundaries of the two groups of data points. Once the convex hull is created, we get maximum margin hyperplane (MMH) as a perpendicular bisector between two convex hulls. MMH is the line which attempts to create the greatest separation between two groups.

What do you understand by Type I vs Type II error?

Type I error is committed when the null hypothesis is true and we reject it, also known as a ‘False Positive’. Type II error is committed when the null hypothesis is false and we fail to reject it, also known as a ‘False Negative’.
In the context of the confusion matrix, we can say Type I error occurs when we classify a value as positive (1) when it is actually negative (0). Type II error occurs when we classify a value as negative (0) when it is actually positive(1).

In k-means or kNN, we use euclidean distance to calculate the distance between nearest neighbors. Why not manhattan distance?

We don’t use manhattan distance because it calculates distance horizontally or vertically only. It has dimension restrictions. On the other hand, the euclidean metric can be used in any space to calculate distance. Since the data points can be present in any dimension, euclidean distance is a more viable option.

Example: think of a chessboard. The number of moves a rook makes is naturally measured by Manhattan distance, because it moves only horizontally or vertically.

Do you suggest that treating a categorical variable as a continuous variable would result in a better predictive model?

For better predictions, the categorical variable can be considered as a continuous variable only when the variable is ordinal in nature.

OLS is to linear regression what maximum likelihood is to logistic regression. Explain the statement.

OLS and maximum likelihood are the methods used by the respective regression techniques to estimate the unknown parameters (coefficients). In simple words, ordinary least squares (OLS) is a method used in linear regression that chooses the parameters minimizing the distance between the actual and predicted values. Maximum likelihood helps in choosing the parameter values that maximize the likelihood of producing the observed data.

When does regularization becomes necessary in Machine Learning?

Regularization becomes necessary when the model begins to overfit. This technique adds a penalty term to the objective function, so that bringing in more features carries a cost. Hence, it pushes the coefficients of many variables towards zero to reduce that penalty term. This helps to reduce model complexity so that the model becomes better at predicting (generalizing).

What is Linear Regression?

Linear Regression is a supervised Machine Learning algorithm. It is used to find the linear relationship between the dependent and the independent variables for predictive analysis.

• Linear regression assumes that the relationship between the features and the target vector is approximately linear. That is, the effect of the features on the target vector is constant.

• In linear regression, the target variable y is assumed to follow a linear function of one or more predictor variables plus some random error. The machine learning task is to estimate the parameters of this equation which can be achieved in two ways:

• The first approach is through the lens of minimizing loss. A common practice in machine learning is to choose a loss function that defines how well a model with a given set of parameters estimates the observed data. The most common loss function for linear regression is squared error loss.

• The second approach is through the lens of maximizing the likelihood. Another common practice in machine learning is to model the target as a random variable whose distribution depends on one or more parameters, and then find the parameters that maximize its likelihood.

Credit: Vikram K.

What is the Variance Inflation Factor?

Variance Inflation Factor (VIF) is an estimate of the amount of multicollinearity in a collection of regression variables.
For each independent variable X_j, regress it on the remaining predictors and compute the resulting R²_j; then VIF_j = 1 / (1 − R²_j).
We calculate this for every independent variable. A high VIF shows that the corresponding independent variable is highly collinear with the others.
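A sketch of computing VIF as 1 / (1 − R²), where R² comes from regressing one predictor on the others (the synthetic collinear data below is an illustrative assumption):

```python
import numpy as np

def vif(X, j):
    """VIF for column j: regress X[:, j] on the other columns; VIF = 1 / (1 - R^2)."""
    y = X[:, j]
    others = np.delete(X, j, axis=1)
    A = np.column_stack([others, np.ones(len(X))])  # add an intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return 1 / (1 - r2)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + rng.normal(scale=0.1, size=200)  # highly collinear with x1
x3 = rng.normal(size=200)                  # independent of the others
X = np.column_stack([x1, x2, x3])
# vif(X, 0) is large (collinearity with x2); vif(X, 2) stays close to 1
```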

We know that one hot encoding increases the dimensionality of a dataset, but label encoding doesn’t. How?

When we use one-hot encoding, there is an increase in the dimensionality of a dataset. The reason for the increase in dimensionality is that, for every class in the categorical variables, it forms a different variable.
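A quick illustration with pandas (the toy column is my own example):

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

# Label encoding: still one column, just integer codes; dimensionality unchanged
label_encoded = df["color"].astype("category").cat.codes

# One-hot encoding: one new column per class; dimensionality grows with the classes
one_hot = pd.get_dummies(df["color"])  # 3 columns (blue, green, red) for 3 classes
```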

What is a Decision Tree?

A decision tree is a hierarchical diagram used to explain the sequence of decisions that must be made to reach the desired output: each internal node tests a feature, each branch is an outcome of that test, and each leaf holds a prediction.

What is the Binarizing of data? How to Binarize?

In most machine learning interviews, apart from theoretical questions, interviewers focus on the implementation part. So this question focuses on implementing the theoretical concepts.
Converting data into binary values on the basis of threshold values is known as the binarizing of data. The values that are less than the threshold are set to 0 and the values that are greater than the threshold are set to 1.
This process is useful when we have to perform feature engineering, and we can also use it for adding unique features.
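A minimal sketch of binarizing with a threshold (the data and the threshold value are illustrative):

```python
import numpy as np

data = np.array([0.2, 1.5, 0.7, 3.0, 0.1])
threshold = 0.5
# Values above the threshold become 1, the rest become 0
binarized = (data > threshold).astype(int)
```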

What is cross-validation?

Cross-validation is essentially a technique used to assess how well a model performs on a new independent dataset. The simplest example of cross-validation is when you split your data into two groups: training data and testing data, where you use the training data to build the model and the testing data to test the model.

• Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter called k that refers to the number of groups that a given data sample is to be split into. As such, the procedure is often called k-fold cross-validation.

• Cross-validation is primarily used in applied machine learning to estimate the skill of a machine learning model on unseen data. That is, to use a limited sample in order to estimate how the model is expected to perform in general when used to make predictions on data not used during the training of the model.

• It is a popular method because it is simple to understand and because it generally results in a less biased or less optimistic estimate of the model skill than other methods, such as a simple train/test split.

• Procedure for K-Fold Cross Validation:
1. Shuffle the dataset randomly.
2. Split the dataset into k groups

3. For each unique group:
a. Take the group as a holdout or test data set
b. Take the remaining groups as a training data set
c. Fit a model on the training set and evaluate it on the test set
d. Retain the evaluation score and discard the model

4. Summarize the skill of the model using the sample of model evaluation scores
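The four-step procedure can be sketched in NumPy (the synthetic data and the simple least-squares model used as the "fit and evaluate" step are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 1))
y = 3 * X[:, 0] + rng.normal(scale=0.1, size=100)

k = 5
idx = rng.permutation(len(X))       # step 1: shuffle the dataset randomly
folds = np.array_split(idx, k)      # step 2: split it into k groups
scores = []
for i in range(k):                  # step 3: each group serves once as the holdout set
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(k) if j != i])
    # fit a simple least-squares slope on the training folds
    slope = (X[train, 0] @ y[train]) / (X[train, 0] @ X[train, 0])
    mse = np.mean((y[test] - slope * X[test, 0]) ** 2)
    scores.append(mse)              # retain the evaluation score, discard the model
cv_score = np.mean(scores)          # step 4: summarize with the mean of the k scores
```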

Credit: Vikram K.

When would you use random forests Vs SVM and why?

There are a couple of reasons why a random forest is a better choice of the model than a support vector machine:
● Random forests allow you to determine the feature importance; SVMs can’t do this.
● Random forests are much quicker and simpler to build than an SVM.
● For multi-class classification problems, SVMs require a one-vs-rest method, which is less scalable and more memory intensive.

What are the drawbacks of a linear model?

There are a couple of drawbacks of a linear model:
● A linear model holds some strong assumptions that may not be true in the application. It assumes a linear relationship, multivariate normality, no or little multicollinearity, no auto-correlation, and homoscedasticity
● A linear model can’t be used for discrete or binary outcomes.
● You can’t vary the model flexibility of a linear model.

 


What are support vector machines?

 

Support vector machines are supervised learning algorithms used for classification and regression analysis.

What is batch statistical learning?

Statistical learning techniques allow learning a function or predictor from a set of observed data that can make predictions about unseen or future data. These techniques provide guarantees on the performance of the learned predictor on the future unseen data based on a statistical assumption on the data generating process.

What is the bias-variance decomposition of classification error in the ensemble method?

The expected error of a learning algorithm can be decomposed into bias and variance. A bias term measures how closely the average classifier produced by the learning algorithm matches the target function. The variance term measures how much the learning algorithm’s prediction fluctuates for different training sets.

When is Ridge regression favorable over Lasso regression?

You can quote ISLR’s authors Hastie, Tibshirani who asserted that, in the presence of few variables with medium / large sized effect, use lasso regression. In presence of many variables with small/medium-sized effects, use ridge regression.
Conceptually, we can say, lasso regression (L1) does both variable selection and parameter shrinkage, whereas Ridge regression only does parameter shrinkage and end up including all the coefficients in the model. In the presence of correlated variables, ridge regression might be the preferred choice. Also, ridge regression works best in situations where the least square estimates have higher variance. Therefore, it depends on our model objective.

You’ve built a random forest model with 10000 trees. You got delighted after getting training error as 0.00. But, the validation error is 34.23. What is going on? Haven’t you trained your model perfectly?

The model has overfit. A training error of 0.00 means the classifier has fit the training data’s patterns (including its noise) so closely that those patterns are not present in unseen data. Hence, when this classifier was run on an unseen sample, it couldn’t find those patterns and returned predictions with a higher error. In a random forest, this happens when we use a larger number of trees than necessary. Hence, to avoid this situation, we should tune the number of trees using cross-validation.

What is a convex hull?

In the case of linearly separable data, the convex hull represents the outer boundaries of the two groups of data points. Once the convex hull is created, we get maximum margin hyperplane (MMH) as a perpendicular bisector between two convex hulls. MMH is the line which attempts to create the greatest separation between two groups.

What do you understand by Type I vs Type II error?

Type I error is committed when the null hypothesis is true and we reject it; it is also known as a ‘False Positive’. Type II error is committed when the null hypothesis is false and we fail to reject it; it is also known as a ‘False Negative’.
In the context of the confusion matrix, a Type I error occurs when we classify a value as positive (1) when it is actually negative (0), and a Type II error occurs when we classify a value as negative (0) when it is actually positive (1).
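As a minimal pure-Python sketch (the function name `confusion_counts` is illustrative, not from any library), Type I and Type II errors can be counted directly from actual and predicted labels:

```python
def confusion_counts(y_true, y_pred):
    """Count Type I (false positive) and Type II (false negative) errors."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)  # Type I
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)  # Type II
    return fp, fn

y_true = [1, 0, 1, 1, 0, 0]
y_pred = [1, 1, 0, 1, 0, 0]
fp, fn = confusion_counts(y_true, y_pred)  # fp=1 (Type I), fn=1 (Type II)
```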

In k-means or kNN, we use euclidean distance to calculate the distance between nearest neighbors. Why not manhattan distance?

We don’t use Manhattan distance because it measures displacement only along the horizontal and vertical axes, which restricts it dimensionally. The Euclidean metric, on the other hand, measures straight-line distance in any direction of the space. Since the data points can lie in any dimension, Euclidean distance is the more viable option.

Example: Think of a chessboard, the movement made by a bishop or a rook is calculated by manhattan distance because of their respective vertical & horizontal movements.
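The two metrics can be compared with a small pure-Python sketch; for the points (0, 0) and (3, 4), the straight-line (Euclidean) distance is 5 while the grid-constrained (Manhattan) distance is 7:

```python
import math

def euclidean(a, b):
    # straight-line distance, valid in any direction of the space
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # sum of axis-aligned moves, like a rook on a chessboard
    return sum(abs(x - y) for x, y in zip(a, b))

p, q = (0, 0), (3, 4)
euclidean(p, q)  # 5.0
manhattan(p, q)  # 7
```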

Do you suggest that treating a categorical variable as a continuous variable would result in a better predictive model?

For better predictions, the categorical variable can be considered as a continuous variable only when the variable is ordinal in nature.

OLS is to linear regression what maximum likelihood is to logistic regression. Explain the statement.

OLS and maximum likelihood are the methods used by the respective regression methods to estimate the unknown parameter (coefficient) values. In simple words, ordinary least squares (OLS) is a method used in linear regression which finds the parameters that minimize the distance between the actual and predicted values. Maximum likelihood helps in choosing the values of the parameters which maximize the likelihood of producing the observed data.

When does regularization become necessary in Machine Learning?

Regularization becomes necessary when the model begins to overfit. This technique adds a cost term to the objective function for bringing in more features. Hence, it tries to push the coefficients for many variables towards zero and thereby reduce the cost term. This helps to reduce model complexity so that the model can become better at predicting (generalizing).

Machine Learning For Dummies  on iOs

Machine Learning For Dummies on Windows

Machine Learning For Dummies Web/Android 

#MachineLearning #AI #ArtificialIntelligence #ML #MachineLearningForDummies #MLOPS #NLP #ComputerVision #AWSMachineLEarning #AzureAI #GCPML

What is Linear Regression?

Linear Regression is a supervised Machine Learning algorithm. It is used to find the linear relationship between the dependent and the independent variables for predictive analysis.

What is the Variance Inflation Factor?

Variance Inflation Factor (VIF) is an estimate of the amount of multicollinearity in a collection of regression variables.
For each independent variable i, VIF_i = 1 / (1 − R_i²), where R_i² is obtained by regressing that variable on all the other independent variables.
We have to calculate this ratio for every independent variable. If the VIF is high, it indicates high collinearity between that variable and the other independent variables.

We know that one hot encoding increases the dimensionality of a dataset, but label encoding doesn’t. How?

When we use one-hot encoding, there is an increase in the dimensionality of a dataset. The reason for the increase in dimensionality is that, for every class in the categorical variables, it forms a different variable.
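A minimal pure-Python sketch of the two encodings (function names are illustrative) makes the dimensionality difference visible:

```python
def label_encode(values):
    # one integer per category -> dimensionality unchanged (still one column)
    categories = sorted(set(values))
    mapping = {c: i for i, c in enumerate(categories)}
    return [mapping[v] for v in values]

def one_hot_encode(values):
    # one new binary column per category -> dimensionality grows
    categories = sorted(set(values))
    return [[1 if v == c else 0 for c in categories] for v in values]

colors = ["red", "green", "blue", "green"]
label_encode(colors)    # [2, 1, 0, 1]  -> one column of integers
one_hot_encode(colors)  # three columns, one per class
```

Label encoding keeps a single integer column, while one-hot encoding creates one binary column per class: here three colors become three columns.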

What is a Decision Tree?

A decision tree is used to explain the sequence of actions that must be performed to get the desired output. It is a hierarchical diagram that shows the actions.

What is the Binarizing of data? How to Binarize?

In most Machine Learning interviews, apart from theoretical questions, interviewers focus on the implementation part. So this interview question is focused on the implementation of the theoretical concepts.
Converting data into binary values on the basis of threshold values is known as the binarizing of data. The values that are less than the threshold are set to 0 and the values that are greater than the threshold are set to 1.
This process is useful when we have to perform feature engineering, and we can also use it for adding unique features.
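A minimal sketch of threshold binarization in plain Python (the function name is illustrative):

```python
def binarize(values, threshold):
    # values above the threshold become 1, all others become 0
    return [1 if v > threshold else 0 for v in values]

binarize([0.2, 0.55, 0.8, 0.4], threshold=0.5)  # [0, 1, 1, 0]
```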

What is cross-validation?

Cross-validation is essentially a technique used to assess how well a model performs on a new independent dataset. The simplest example of cross-validation is when you split your data into two groups: training data and testing data, where you use the training data to build the model and the testing data to test the model.
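Beyond a single train/test split, k-fold cross-validation rotates the test set so every sample is held out exactly once. A minimal pure-Python sketch (assuming the simple case where k divides the number of samples evenly):

```python
def k_fold_splits(n_samples, k):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation."""
    indices = list(range(n_samples))
    fold_size = n_samples // k
    for i in range(k):
        test = indices[i * fold_size:(i + 1) * fold_size]
        train = indices[:i * fold_size] + indices[(i + 1) * fold_size:]
        yield train, test

folds = list(k_fold_splits(6, k=3))
# fold 0: train=[2, 3, 4, 5], test=[0, 1]; each sample is tested exactly once
```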

When would you use random forests Vs SVM and why?

There are a couple of reasons why a random forest is a better choice of the model than a support vector machine:
● Random forests allow you to determine feature importance; SVMs can’t do this.
● Random forests are much quicker and simpler to build than an SVM.
● For multi-class classification problems, SVMs require a one-vs-rest method, which is less scalable and more memory intensive.

What are the drawbacks of a linear model?

There are a couple of drawbacks of a linear model:
● A linear model holds some strong assumptions that may not be true in the application. It assumes a linear relationship, multivariate normality, no or little multicollinearity, no auto-correlation, and homoscedasticity
● A linear model can’t be used for discrete or binary outcomes.
● You can’t vary the model flexibility of a linear model.


Do you think 50 small decision trees are better than a large one? Why?

Another way of asking this question is “Is a random forest a better model than a decision tree?”
And the answer is yes because a random forest is an ensemble method that takes many weak decision trees to make a strong learner. Random forests are more accurate, more robust, and less prone to overfitting. 

What is a kernel? Explain the kernel trick

A kernel is a way of computing the dot product of two vectors x and y in some (possibly very high-dimensional) feature space, which is why kernel functions are sometimes called “generalized dot products”.
The kernel trick is a method of using a linear classifier to solve a non-linear problem by transforming linearly inseparable data to linearly separable ones in a higher dimension.
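As a small pure-Python illustration of the kernel trick: the degree-2 polynomial kernel k(x, y) = (x·y)² gives exactly the dot product in an explicit quadratic feature space, without ever constructing that space.

```python
import math

def poly_kernel(x, y):
    # k(x, y) = (x . y)^2, computed WITHOUT visiting the feature space
    return sum(a * b for a, b in zip(x, y)) ** 2

def phi(x):
    # the explicit degree-2 feature map for 2-D input: (x1^2, sqrt(2)*x1*x2, x2^2)
    x1, x2 = x
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

x, y = (1.0, 2.0), (3.0, 0.5)
direct = poly_kernel(x, y)                             # kernel value: 16.0
explicit = sum(a * b for a, b in zip(phi(x), phi(y)))  # dot product in feature space
# direct == explicit (up to float rounding): the "trick" skips building phi entirely
```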

State the differences between causality and correlation?

Causality applies to situations where one action, say X, causes an outcome, say Y, whereas Correlation is just relating one action (X) to another action(Y) but X does not necessarily cause Y.

What is the exploding gradient problem while using the backpropagation technique?

When large error gradients accumulate and result in large changes in the neural network weights during training, it is called the exploding gradient problem. The values of weights can become so large as to overflow and result in NaN values. This makes the model unstable and the learning of the model to stall just like the vanishing gradient problem.

What do you mean by Associative Rule Mining (ARM)?

Associative Rule Mining is one of the techniques to discover patterns in data like features (dimensions) which occur together and features (dimensions) which are correlated.

What is Marginalization? Explain the process.

Marginalization is summing the probability of a random variable X given the joint probability distribution of X with other variables. It is an application of the law of total probability.
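A minimal pure-Python sketch of marginalization over a small made-up joint distribution P(X, Y):

```python
# Joint distribution P(X, Y) as a dict; marginalize out Y to obtain P(X)
joint = {
    ("sunny", "hot"): 0.3, ("sunny", "cold"): 0.2,
    ("rainy", "hot"): 0.1, ("rainy", "cold"): 0.4,
}

def marginal_x(joint):
    p_x = {}
    for (x, _y), p in joint.items():
        p_x[x] = p_x.get(x, 0.0) + p  # law of total probability: sum over Y
    return p_x

marginal_x(joint)  # P(X): {'sunny': 0.5, 'rainy': 0.5}
```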

Why is the rotation of components so important in Principle Component Analysis(PCA)?

Rotation in PCA is very important as it maximizes the separation within the variance captured by the components, which makes the interpretation of the components easier. If the components are not rotated, we need more components to describe the same amount of variance.

What is the difference between regularization and normalization?

Normalization adjusts the data; regularization adjusts the prediction function. If your data is on very different scales (especially low to high), you would want to normalize the data: alter each column to have compatible basic statistics. This can help make sure there is no loss of accuracy. One of the goals of model training is to identify the signal and ignore the noise; if the model is given free rein to minimize error, there is a possibility of it suffering from overfitting.
Regularization imposes some control on this by providing simpler fitting functions over complex ones.

How does the SVM algorithm deal with self-learning?

SVM has a learning rate and expansion rate which takes care of this. The learning rate compensates or penalizes the hyperplanes for making all the wrong moves and expansion rate deals with finding the maximum separation area between classes.

How do you handle outliers in the data?

Outlier is an observation in the data set that is far away from other observations in the data set.
We can discover outliers using tools and functions like box plot, scatter plot, Z-Score, IQR score etc. and then handle them based on the visualization we have got. To handle outliers, we can cap at some threshold, use transformations to reduce skewness of the data and remove outliers if they are anomalies or errors.

What are some techniques used to find similarities in the recommendation system?


Pearson correlation and Cosine correlation are techniques used to find similarities in recommendation systems.

Why would you Prune your tree?

In the context of data science or AIML, pruning refers to the process of removing redundant branches of a decision tree. Decision trees are prone to overfitting, and pruning the tree helps to reduce its size and minimize the chances of overfitting. Pruning involves turning branches of a decision tree into leaf nodes and removing the leaf nodes from the original branch. It serves as a tool to trade off model complexity against predictive accuracy.

What are some of the EDA Techniques?

Exploratory Data Analysis (EDA) helps analysts to understand the data better and forms the foundation of better models.
Visualization
● Univariate visualization
● Bivariate visualization
● Multivariate visualization
Missing Value Treatment – Replace missing values with either the mean or the median.
Outlier Detection – Use a box plot to identify the distribution of outliers, then apply the IQR to set the boundary.

What is data augmentation?


Data augmentation is a technique for synthesizing new data by modifying existing data in such a way that the target is not changed, or it is changed in a known way.
CV is one of the fields where data augmentation is very useful. There are many modifications that we can do to images:
● Resize
● Horizontal or vertical flip
● Rotate
● Add noise
● Deform
● Modify colors
Each problem needs a customized data augmentation pipeline. For example, on OCR, doing flips will change the text and won’t be beneficial; however, resizes and small rotations may help.

What is Inductive Logic Programming in Machine Learning (ILP)?

Inductive Logic Programming (ILP) is a subfield of machine learning which uses logic programming to represent background knowledge and examples.

What is the difference between inductive machine learning and deductive machine learning?

The difference between inductive and deductive machine learning is as follows: in inductive learning the model learns by example from a set of observed instances to draw a generalized conclusion, whereas in deductive learning the model starts from a set of known rules or conclusions and applies them to draw specific inferences from the data.

What is the Difference between machine learning and deep learning?


Machine learning is a branch of computer science and a method to implement artificial intelligence. This technique provides the ability to automatically learn and improve from experience without being explicitly programmed.
Deep learning can be considered a subset of machine learning. It is mainly based on artificial neural networks, where data is taken as input and the technique makes intuitive decisions using the network.

What Are The Steps Involved In Machine Learning Project?

As you plan a machine learning project, there are several important steps you must follow to achieve a good working model: data collection, data preparation, choosing a machine learning model, training the model, model evaluation, parameter tuning, and lastly prediction.

What are Differences between Artificial Intelligence and Machine Learning?

Artificial intelligence is a broader prospect than machine learning. Artificial intelligence mimics the cognitive functions of the human brain. The purpose of AI is to carry out a task in an intelligent manner based on algorithms. On the other hand, machine learning is a subclass of artificial intelligence. To develop an autonomous machine in such a way so that it can learn without being explicitly programmed is the goal of machine learning.

What are the steps Needed to choose the Appropriate Machine Learning Algorithm for your Classification problem?

Firstly, you need to have a clear picture of your data, your constraints, and your problems before heading towards different machine learning algorithms. Secondly, you have to understand which type and kind of data you have because it plays a primary role in deciding which algorithm you have to use.

Following this step is the data categorization step, which is a two-step process – categorization by input and categorization by output. The next step is to understand your constraints: What is your data storage capacity? How fast does the prediction have to be? And so on.

Finally, find the available machine learning algorithms and implement them wisely. Along with that, also try to optimize the hyperparameters which can be done in three ways – grid search, random search, and Bayesian optimization.

What is the Convex Function?

A convex function is a continuous function whose value at the midpoint of every interval in its domain is no greater than the arithmetic mean of its values at the two ends of the interval.

What’s the Relationship between True Positive Rate and Recall?

The true positive rate in machine learning is the proportion of actual positives that have been correctly identified, and recall is likewise the fraction of relevant (positive) instances that were correctly retrieved. Therefore, they are the same thing under different names. It is also known as sensitivity.

What are some tools for parallelizing Machine Learning Algorithms?

Almost all machine learning algorithms are easy to parallelize. Some of the basic tools for parallelizing are Matlab, Weka, R, Octave, or the Python-based scikit-learn.

What is meant by Genetic Programming?

Genetic Programming (GP) is almost similar to an Evolutionary Algorithm, a subset of machine learning. Genetic programming software systems implement an algorithm that uses random mutation, a fitness function, crossover, and multiple generations of evolution to resolve a user-defined task. The genetic programming model is based on testing and choosing the best option among a set of results.

What is meant by Bayesian Networks?

Bayesian Networks, also referred to as ‘belief networks’ or ‘causal networks’, are used to represent the graphical model for probability relationships among a set of variables.
For example, a Bayesian network can be used to represent the probabilistic relationships between diseases and symptoms. As per the symptoms, the network can also compute the probabilities of the presence of various diseases.
Efficient algorithms can perform inference or learning in Bayesian networks. Bayesian networks which relate the variables (e.g., speech signals or protein sequences) are called dynamic Bayesian networks.


Which are the two components of the Bayesian logic program?

A Bayesian logic program consists of two components:
● Logical: contains a set of Bayesian clauses, which capture the qualitative structure of the domain.
● Quantitative: encodes the quantitative information about the domain.

How is machine learning used in day-to-day life?

Most people already use machine learning in their everyday lives. When you engage with the internet, you are actually expressing your preferences, likes, and dislikes through your searches. All these things are picked up by cookies on your computer, and from them the behavior of the user is evaluated. This helps tailor the user’s experience of the internet and provide similar suggestions.
Navigation systems are another example, where machine learning is used together with optimization techniques to calculate the distance between two places.

What is sampling? Why do we need it?

Sampling is a process of choosing a subset from a target population that would serve as its representative. We use the data from the sample to understand the pattern in the community as a whole. Sampling is necessary because often, we can not gather or process the complete data within a reasonable time.
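In Python, a simple random sample without replacement can be drawn with the standard library’s `random.sample`; the population and sample size below are made up for illustration:

```python
import random

random.seed(42)  # fix the seed so the draw is reproducible
population = list(range(1, 101))          # the full target population
sample = random.sample(population, k=10)  # simple random sample, no replacement
# statistics computed on `sample` estimate properties of the whole population
```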

What does the term decision boundary mean?

A decision boundary or a decision surface is a hypersurface which divides the underlying feature space into two subspaces, one for each class. If the decision boundary is a hyperplane, then the classes are linearly separable.

Define entropy?

Entropy is the measure of uncertainty associated with random variable Y. It is the expected number of bits required to communicate the value of the variable.
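The definition above can be sketched in plain Python (the function name is illustrative):

```python
import math

def entropy(probs):
    """Shannon entropy in bits of a discrete probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

entropy([0.5, 0.5])  # 1.0 bit   -> maximum uncertainty for two outcomes
entropy([1.0])       # 0.0 bits  -> no uncertainty at all
entropy([0.9, 0.1])  # ~0.469 bits -> a skewed coin is less uncertain
```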

Indicate the top intents of machine learning?

The top intents of machine learning are stated below:
● The system learns from previously established computations to give well-founded decisions and outputs.
● It locates certain patterns in the data and then makes predictions based on them to provide answers.

Highlight the differences between the Generative model and the Discriminative model?

The aim of the Generative model is to generate new samples from the same distribution and new data instances, Whereas, the Discriminative model highlights the differences between different kinds of data instances. It tries to learn directly from the data and then classifies the data.

Identify the most important aptitudes of a machine learning engineer?

Machine learning allows the computer to learn by itself without being explicitly programmed. It helps the system learn from experience and then improve from its mistakes. An intelligent system based on machine learning can learn from recorded data and past incidents.
In-depth knowledge of statistics, probability, data modelling, programming languages, and computer science, along with the application of ML libraries and algorithms and software design, is required to become a successful machine learning engineer.

What is feature engineering? How do you apply it in the process of modelling?

Feature engineering is the process of transforming raw data into features that better represent the underlying problem to the predictive models, resulting in improved model accuracy on unseen data.

How can learning curves help create a better model?

Learning curves give the indication of the presence of overfitting or underfitting. In a learning curve, the training error and cross-validating error are plotted against the number of training data points.


Perception: Vision, Audio, Speech, Natural Language

NLP: TF-IDF helps you to establish what?

TF-IDF helps to establish how important a particular word is in the context of the document corpus. It takes into account the number of times the word appears in a document, offset by the number of documents in the corpus that contain the word.
– TF is the frequency of the term divided by the total number of terms in the document.
– IDF is obtained by dividing the total number of documents by the number of documents containing the term and then taking the logarithm of that quotient.
– TF-IDF is then the product of the two values TF and IDF.
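As a minimal pure-Python sketch of those formulas (function names are illustrative, and documents are represented as token lists):

```python
import math

def tf(term, doc):
    # term frequency: occurrences of the term / total terms in the document
    return doc.count(term) / len(doc)

def idf(term, corpus):
    # inverse document frequency: log(total docs / docs containing the term)
    containing = sum(1 for doc in corpus if term in doc)
    return math.log(len(corpus) / containing)

def tf_idf(term, doc, corpus):
    return tf(term, doc) * idf(term, corpus)

corpus = [["the", "cat", "sat"], ["the", "dog", "ran"], ["the", "cat", "slept"]]
tf_idf("cat", corpus[0], corpus)  # > 0: "cat" appears in only 2 of 3 docs
tf_idf("the", corpus[0], corpus)  # 0.0: "the" is in every doc, idf = log(1) = 0
```

Note that a word appearing in every document gets an IDF of log(1) = 0, so its TF-IDF vanishes no matter how frequent it is.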

List 10 use cases to be solved using NLP techniques?

● Sentiment Analysis
● Language Translation (English to German, Chinese to English, etc..)
● Document Summarization
● Question Answering
● Sentence Completion
● Attribute extraction (Key information extraction from the documents)
● Chatbot interactions
● Topic classification
● Intent extraction
● Grammar or Sentence correction
● Image captioning
● Document Ranking
● Natural Language inference

Which NLP model gives the best accuracy amongst the following: BERT, XLNET, GPT-2, ELMo

XLNet has given the best accuracy among these models. It has outperformed BERT on 20 tasks and achieves state-of-the-art results on 18 tasks, including sentiment analysis, question answering, and natural language inference.

What is Naive Bayes algorithm, When we can use this algorithm in NLP?

The Naive Bayes algorithm is a collection of classifiers based on Bayes’ theorem. This family of NLP models can be used for a wide range of classification tasks, including sentiment prediction, spam filtering, document classification, and more.
The Naive Bayes algorithm converges faster and requires less training data. Compared to discriminative models like logistic regression, a Naive Bayes model takes less time to train. The algorithm is well suited to problems with multiple classes and to text classification where the data is dynamic and changes frequently.

Explain Dependency Parsing in NLP?

Dependency Parsing, also known as Syntactic parsing in NLP is a process of assigning syntactic structure to a sentence and identifying its dependency parses. This process is crucial to understand the correlations between the “head” words in the syntactic structure.
The process of dependency parsing can be a little complex considering how any sentence can have more than one dependency parses. Multiple parse trees are known as ambiguities.
Dependency parsing needs to resolve these ambiguities in order to effectively assign a syntactic structure to a sentence.
Dependency parsing can be used in the semantic analysis of a sentence apart from the syntactic structuring.

What is text Summarization?

Text summarization is the process of shortening a long piece of text with its meaning and effect intact. Text summarization intends to create a summary of any given piece of text and outlines the main points of the document. This technique has improved in recent times and is capable of summarizing volumes of text successfully.
Text summarization has proved to be a blessing, since machines can summarize large volumes of text in no time, which would otherwise be really time-consuming. There are two types of text summarization:
● Extraction-based summarization
● Abstraction-based summarization

What is NLTK? How is it different from Spacy?

NLTK or Natural Language Toolkit is a series of libraries and programs that are used for symbolic and statistical natural language processing. This toolkit contains some of the most powerful libraries that can work on different ML techniques to break down and understand human language. NLTK is used for Lemmatization, Punctuation, Character count, Tokenization, and Stemming.
The differences between NLTK and spaCy are as follows:
● While NLTK offers a collection of algorithms to choose from, spaCy ships only the best-suited algorithm for each problem in its toolkit
● NLTK supports a wider range of languages than spaCy (spaCy supports only 7 languages)
● While spaCy has an object-oriented library, NLTK has a string-processing library
● spaCy supports word vectors while NLTK does not

What is information extraction?

Information extraction in the context of Natural Language Processing refers to the technique of extracting structured information automatically from unstructured sources in order to ascribe meaning to it. This can include extracting information regarding the attributes of entities, the relationships between different entities, and more. The various models of information extraction include:
● Tagger Module
● Relation Extraction Module
● Fact Extraction Module
● Entity Extraction Module
● Sentiment Analysis Module
● Network Graph Module
● Document Classification & Language Modeling Module

What is Bag of Words?

Bag of Words is a commonly used model that depends on word frequencies or occurrences to train a classifier. This model creates an occurrence matrix for documents or sentences irrespective of its grammatical structure or word order.
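A minimal pure-Python sketch of the occurrence matrix (the function name is illustrative); note how word order is discarded:

```python
def bag_of_words(documents):
    # vocabulary: every distinct word across all documents, sorted for stable columns
    vocab = sorted({w for doc in documents for w in doc.split()})
    # occurrence matrix: one row per document, one count column per vocabulary word
    matrix = [[doc.split().count(w) for w in vocab] for doc in documents]
    return vocab, matrix

docs = ["the cat sat", "the cat sat on the mat"]
vocab, matrix = bag_of_words(docs)
# vocab  = ['cat', 'mat', 'on', 'sat', 'the']
# matrix = [[1, 0, 0, 1, 1], [1, 1, 1, 1, 2]]
```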

What is Pragmatic Ambiguity in NLP?

Pragmatic ambiguity refers to those words which have more than one meaning and their use in any sentence can depend entirely on the context. Pragmatic ambiguity can result in multiple interpretations of the same sentence. More often than not, we come across sentences which have words with multiple meanings, making the sentence open to interpretation. This multiple interpretation causes ambiguity and is known as Pragmatic ambiguity in NLP.

What is a Masked Language Model?

Masked language models help learners to understand deep representations in downstream tasks by taking an output from the corrupt input. This model is often used to predict the words to be used in a sentence.

What are the best NLP Tools?

Some of the best NLP tools from open sources are:
● SpaCy
● TextBlob
● Textacy
● Natural language Toolkit
● Retext
● NLP.js
● Stanford NLP
● CogcompNLP

What is POS tagging?

Parts-of-speech tagging, better known as POS tagging, refers to the process of identifying specific words in a document and grouping them by part of speech based on their context. POS tagging is also known as grammatical tagging since it involves understanding grammatical structures and identifying the respective components.
POS tagging is a complicated process since the same word can be a different part of speech depending on the context. For the same reason, the generic process used for word mapping is quite ineffective for POS tagging.

What is NER?

Named entity recognition, more commonly known as NER, is the process of identifying specific entities in a text document which are more informative and have a unique context. These often denote places, people, organizations, and more. Even though it seems like these entities are proper nouns, the NER process is far from identifying just nouns. In fact, NER involves entity chunking or extraction, wherein entities are segmented and categorized under different predefined classes. This step further helps in extracting information.

Explain the Masked Language Model?

Masked language modelling is the process in which the output is taken from the corrupted input.
This model helps the learners to master the deep representations in downstream tasks. You can predict a word from the other words of the sentence using this model.

What is pragmatic analysis in NLP?

Pragmatic Analysis: It deals with outside word knowledge, which means knowledge that is external to the documents and/or queries. Pragmatics analysis that focuses on what was described is reinterpreted by what it actually meant, deriving the various aspects of language that require real-world knowledge.

What is perplexity in NLP?

The word “perplexed” means “puzzled” or “confused”, so perplexity in general means the inability to deal with something complicated or under-specified. Perplexity in NLP is accordingly a way to measure the extent of uncertainty in predicting some text.
In NLP, perplexity is a way of evaluating language models. Perplexity can be high or low: low perplexity is good, because the model’s uncertainty about the text is small, while high perplexity is bad, because the model’s uncertainty about the text is large.
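As a small pure-Python illustration (the per-token probabilities here are made up), perplexity can be computed as the inverse geometric mean of the probabilities a model assigned to the observed tokens:

```python
import math

def perplexity(probs):
    """Perplexity of a model that assigned probability probs[i] to token i."""
    n = len(probs)
    avg_log = sum(math.log2(p) for p in probs) / n
    return 2 ** (-avg_log)

perplexity([0.5, 0.5, 0.5])  # 2.0   -> like choosing between 2 options per step
perplexity([0.9, 0.9, 0.9])  # ~1.11 -> low perplexity: a confident model
perplexity([0.1, 0.1, 0.1])  # ~10.0 -> high perplexity: an uncertain model
```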

What is ngram in NLP?

An n-gram in NLP is simply a sequence of n words, and we can also count which sequences appear most frequently in a corpus. For example, consider these three word sequences:
● (a) New York (2-gram)
● (b) The Golden Compass (3-gram)
● (c) She was there in the hotel (4-gram)
From the above, we would expect sequence (a) to appear far more frequently than the other two, and the last sequence (c) is not seen that often. If we assign a probability to the occurrence of each n-gram, it becomes useful for making next-word predictions and for spelling-error correction.

Explain differences between AI, Machine Learning and NLP

Why is self-attention awesome?

“In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translations, such as word-piece and byte-pair representations.” — from Attention is all you need.


What are stop words?


Stop words are said to be useless data for a search engine. Words such as articles, prepositions, etc. are considered as stop words. There are stop words such as was, were, is, am, the, a, an, how, why, and many more. In Natural Language Processing, we eliminate the stop words to understand and analyze the meaning of a sentence. The removal of stop words is one of the most important tasks for search engines. Engineers design the algorithms of search engines in such a way that they ignore the use of stop words. This helps show the relevant search result for a query.

What is Latent Semantic Indexing (LSI)?

Latent semantic indexing is a mathematical technique used to improve the accuracy of the information retrieval process. The design of LSI algorithms allows machines to detect the hidden (latent) correlation between semantics (words). To enhance information understanding, machines generate various concepts that associate with the words of a sentence.
The technique used for information understanding is called singular value decomposition. It is generally used to handle static and unstructured data. The matrix obtained from singular value decomposition contains rows for words and columns for documents. This method is best suited to identifying components and grouping them according to their types.
The main principle behind LSI is that words carry a similar meaning when used in a similar context.
Computational LSI models are slow in comparison to other models. However, they are good at contextual awareness that helps improve the analysis and understanding of a text or a document.

What are Regular Expressions?

A regular expression is used to match and tag words. It consists of a series of characters for matching strings.
If A and B are regular expressions, then the following hold:
● If {ɛ} is a regular language, then ɛ is a regular expression for it.
● If A and B are regular expressions, then A + B (their union) is also a regular expression within the language {A, B}.
● If A and B are regular expressions, then their concatenation A.B is a regular expression.
● If A is a regular expression, then A* (A occurring zero or more times) is also a regular expression.
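
These closure properties (union, concatenation, Kleene star) map directly onto Python's `re` module, as a quick illustration:

```python
import re

# Union (cat|dog), concatenation with 's*', and Kleene star (*),
# mirroring the closure properties listed above.
pattern = re.compile(r"(cat|dog)s*")  # 'cat' or 'dog', then zero or more 's'

print(bool(pattern.fullmatch("cat")))    # True
print(bool(pattern.fullmatch("dogss")))  # True
print(bool(pattern.fullmatch("bird")))   # False
```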

What are unigrams, bigrams, trigrams, and n-grams in NLP?

When we parse a sentence one word at a time, then it is called a unigram. The sentence parsed two words at a time is a bigram.
When the sentence is parsed three words at a time, then it is a trigram. Similarly, n-gram refers to the parsing of n words at a time.
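
A minimal sketch of n-gram extraction (simple whitespace tokenization, for illustration only):

```python
def ngrams(sentence, n):
    """Return the list of n-grams (tuples of n consecutive words)."""
    tokens = sentence.split()
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

sent = "the cat sat on the mat"
print(ngrams(sent, 1))  # unigrams: one word at a time
print(ngrams(sent, 2))  # bigrams: [('the', 'cat'), ('cat', 'sat'), ...]
print(ngrams(sent, 3))  # trigrams
```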

What are the steps involved in solving an NLP problem?

Below are the steps involved in solving an NLP problem:

1. Gather the text from the available dataset or by web scraping
2. Apply stemming and lemmatization for text cleaning
3. Apply feature engineering techniques
4. Embed using word2vec
5. Train the built model using neural networks or other Machine Learning techniques
6. Evaluate the model’s performance
7. Make appropriate changes in the model
8. Deploy the model

What are the common components of natural language processing? These components are important for understanding NLP properly — can you explain them in detail with an example?

Natural language processing (NLP) typically involves a number of components. Some of the major ones are explained below:
● Entity extraction: identifying and extracting critical entities from the available information, which helps segment a sentence by the entities it contains. It can help identify whether a person is fictional or real, and similarly identify organizations, events, geographic locations, and so on.
● Syntactic analysis: determining the proper ordering and grammatical structure of the words.

When processing natural language we commonly use the terminology NLP, and every language is handled under that same umbrella. Can you explain the key NLP terminology in detail with examples?

This is a basic NLP interview question. Several groups of terminology come up when explaining natural language processing. Some of the key ones are given below:

● Vectors and weights: word vectors (e.g., Google word vectors), TF-IDF, document length, document variety.
● Structure of text: named entities, part-of-speech tagging, identifying the head of a sentence.
● Sentiment analysis: sentiment features, the entities carrying the sentiment, a common sentiment dictionary.
● Text classification: supervised learning, a training set, a validation (dev) set, a defined test set, individual text features, LDA.
● Machine reading: entity extraction, entity linking, DBpedia, libraries such as Pikes or FRED.

Explain briefly about word2vec

Word2Vec embeds words in a lower-dimensional vector space using a shallow neural network.
The result is a set of word-vectors where vectors close together in vector space have similar meanings based on context, and word-vectors distant to each other have differing meanings. For example, apple and orange would be close together and apple and gravity would be relatively far.
There are two versions of this model based on skip-grams (SG) and continuous-bag-of-words (CBOW).
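
The "close together in vector space" idea can be illustrated with cosine similarity over toy, hand-made vectors (these 3-d vectors are invented for illustration; real word2vec vectors are learned by the shallow network over a large corpus, e.g. with gensim):

```python
import math

# Toy 3-d "word vectors" (illustrative only, not learned).
vectors = {
    "apple":   [0.9, 0.8, 0.1],
    "orange":  [0.85, 0.75, 0.15],
    "gravity": [0.1, 0.2, 0.95],
}

def cosine(u, v):
    """Cosine similarity: dot product divided by the vector norms."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Words used in similar contexts end up close together:
print(cosine(vectors["apple"], vectors["orange"])
      > cosine(vectors["apple"], vectors["gravity"]))  # True
```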

What are the metrics used to test an NLP model?

Accuracy, precision, recall, and F1. Accuracy is the usual ratio of correct predictions to total predictions, but judging a model by accuracy alone is naive considering the complexities involved.

What are some ways we can preprocess text input?

Here are several preprocessing steps that are commonly used for NLP tasks:
● case normalization: we can convert all input to the same case (lowercase or uppercase) as a way of reducing our text to a more canonical form
● punctuation/stop word/white space/special characters removal: if we don’t think these words or characters are relevant, we can remove them to reduce the feature space
● lemmatizing/stemming: we can also reduce words to their inflectional forms (i.e. walks → walk) to further trim our vocabulary
● generalizing irrelevant information: we can replace all numbers with a <NUMBER> token or all names with a <NAME> token.
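
Several of the steps above can be sketched in a few lines (the stop list is a hypothetical minimal one; lemmatization/stemming would need a library such as NLTK or spaCy):

```python
import re

def preprocess(text, stop_words=frozenset({"the", "a", "an"})):
    """Case normalization, punctuation removal, number generalization,
    and stop-word removal, in that order."""
    text = text.lower()                      # case normalization
    text = re.sub(r"[^\w\s]", " ", text)     # punctuation/special chars removal
    text = re.sub(r"\d+", "<NUMBER>", text)  # generalize numbers to a token
    return [t for t in text.split() if t not in stop_words]

print(preprocess("The price is 42 dollars!"))
# ['price', 'is', '<NUMBER>', 'dollars']
```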

How does the encoder-decoder structure work for language modelling?

The encoder-decoder structure is a deep learning model architecture responsible for several state of the art solutions, including Machine Translation.
The input sequence is passed to the encoder where it is transformed to a fixed-dimensional vector representation using a neural network. The transformed input is then decoded using another neural network. Then, these outputs undergo another transformation and a SoftMax layer. The final output is a vector of probabilities over the vocabularies. Meaningful information is extracted based on these probabilities.

How would you implement an NLP system as a service, and what are some pitfalls you might face in production?

This is less of an NLP question than a question about productionizing machine learning models. There are, however, certain intricacies to NLP models.

Without diving too much into the productionization aspect, an ideal Machine Learning service will have:
● endpoint(s) that other business systems can use to make inference
● a feedback mechanism for validating model predictions
● a database to store predictions and ground truths from the feedback
● a workflow orchestrator which will (upon some signal) re-train and load the new model for serving, based on the records from the database plus any prior training data
● some form of model version control to facilitate rollbacks in case of bad deployments
● post-production accuracy and error monitoring

What are attention mechanisms and why do we use them?

This was a follow-up to the encoder-decoder question. Only the output from the last time step is passed to the decoder, resulting in a loss of information learned at previous time steps. This information loss is compounded for longer text sequences with more time steps.
Attention mechanisms are a function of the hidden weights at each time step. When we use attention in encoder-decoder networks, the fixed-dimensional vector passed to the decoder becomes a function of all vectors outputted in the intermediary steps.
Two commonly used attention mechanisms are additive attention and multiplicative attention. As the names suggest, additive attention is a weighted sum while multiplicative attention is a weighted multiplier of the hidden weights. During the training process, the model also learns weights for the attention mechanisms to recognize the relative importance of each time step.
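
A minimal pure-Python sketch of (dot-product) attention over encoder hidden states — score each time step against a query, softmax the scores into weights, and take the weighted sum (real implementations are batched tensor operations; this is illustrative only):

```python
import math

def attention(query, keys, values):
    """Dot-product attention sketch: the context vector is a weighted
    sum over ALL time steps, not just the last one."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]      # softmax: weights sum to 1
    dim = len(values[0])
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(dim)]
    return context, weights

# Three encoder time steps with 2-d hidden states:
hs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
context, weights = attention([1.0, 0.0], hs, hs)
print(round(sum(weights), 6))  # 1.0 -- the weights form a distribution
```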

How can we handle misspellings for text input?

By using word embeddings trained over a large corpus (for instance, an extensive web scrape of billions of words), the model vocabulary would include common misspellings by design. The model can then learn the relationship between misspelled and correctly spelled words to recognize their semantic similarity.
We can also preprocess the input to prevent misspellings. Terms not found in the model vocabulary can be mapped to the “closest” vocabulary term using:
● edit distance between strings
● phonetic distance between word pronunciations
● keyboard distance to catch common typos
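
Edit distance (Levenshtein distance) is the simplest of these to sketch — the toy vocabulary below is hypothetical:

```python
def edit_distance(a, b):
    """Levenshtein distance: minimum number of single-character
    insertions, deletions, or substitutions to turn a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            cost = 0 if ca == cb else 1
            curr.append(min(curr[j - 1] + 1,      # insertion
                            prev[j] + 1,          # deletion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Map a misspelling to the closest vocabulary term:
vocab = ["language", "machine", "learning"]
print(min(vocab, key=lambda w: edit_distance("machnie", w)))  # machine
```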


What is the problem with ReLU?

● Exploding gradient (solved by gradient clipping)
● Dying ReLU — no learning if the activation is 0 (solved by parametric ReLU)
● The mean and variance of activations are not 0 and 1 (partially solved by subtracting around 0.5 from the activation; better explained in the fastai videos)

What is the difference between learning latent features using SVD and getting embedding vectors using deep network?

SVD uses a linear combination of inputs, while a neural network uses a nonlinear combination.

What is the information in the hidden and cell state of LSTM?

The hidden state stores all the information up to that time step, while the cell state stores particular information that might be needed in a future time step.

When is self-attention not faster than recurrent layers?

When the sequence length is greater than the representation dimensionality. This is rare in practice.

What is the benefit of learning rate warm-up?

Learning rate warm-up is a learning rate schedule with a low (or lower) learning rate at the beginning of training, to avoid divergence due to unreliable gradients early on. As the model becomes more stable, the learning rate is increased to speed up convergence.

What’s the difference between hard and soft parameter sharing in multi-task learning?

Hard sharing is where the tasks share the same hidden layers: we train on all the tasks at the same time and update the shared weights using all the losses, usually with task-specific output layers on top. Soft sharing is where each task has its own model and parameters, with regularization encouraging the parameters of the different models to stay similar.

What’s the difference between BatchNorm and LayerNorm?

BatchNorm computes the mean and variance at each layer for every minibatch, whereas LayerNorm computes the mean and variance for every sample for each layer independently.

Batch normalization also allows you to set higher learning rates, increasing the speed of training, as it reduces the instability of the initial starting weights.

Why does the transformer block have LayerNorm instead of BatchNorm?

Looking at the advantages of LayerNorm: it is robust to batch size and works better here because it operates at the sample level rather than the batch level.

What changes would you make to your deep learning code if you knew there are errors in your training data?

We can do label smoothing, where the smoothing value is based on the error percentage. If any particular class has a known error rate, we can also use class weights to modify the loss.

What are the tricks used in ULMFiT? (Not a great question, but it checks awareness)
● LM tuning with task text
● Weight dropout
● Discriminative learning rates for layers
● Gradual unfreezing of layers
● Slanted triangular learning rate schedule
This can be followed up with a question on explaining how they help.

Tell me a language model which doesn’t use dropout

ALBERT v2 — this throws light on the fact that many assumptions we take for granted are not necessarily true. The regularization effect of parameter sharing in ALBERT is so strong that dropout is not needed. (ALBERT v1 had dropout.)

What are the differences between GPT and GPT-2?

● Layer normalization was moved to the input of each sub-block, similar to a residual unit of type “building block” (differently from the original type “bottleneck”, it has batch normalization applied before weight layers).
● An additional layer normalization was added after the final self-attention block.
● A modified initialization was constructed as a function of the model depth.
● The weights of residual layers were initially scaled by a factor of 1/√n where n is the number of residual layers.
● Use larger vocabulary size and context size.

What are the differences between GPT and BERT?

● GPT is not bidirectional and has no concept of masking
● BERT adds next sentence prediction task in training and so it also has a segment embedding

What are the differences between BERT and ALBERT v2?

● Embedding matrix factorisation (helps reduce the number of parameters)
● No dropout
● Parameter sharing (helps reduce the number of parameters and acts as regularisation)

How does parameter sharing in ALBERT affect the training and inference time?

No effect. Parameter sharing just decreases the number of parameters.

How would you reduce the inference time of a trained NN model?

● Serve on GPU/TPU/FPGA
● 16 bit quantisation and served on GPU with fp16 support
● Pruning to reduce parameters
● Knowledge distillation (To a smaller transformer model or simple neural network)
● Hierarchical softmax/Adaptive softmax
● You can also cache results for frequent inputs.

Would you use BPE with classical models?

Of course! BPE is a smart tokeniser and can give us a smaller vocabulary, which helps us build a model with fewer parameters.

How would you make an arXiv paper search engine?

How would you make a plagiarism detector?

Get top k results with TF-IDF similarity and then rank results with
● semantic encoding + cosine similarity
● a model trained for ranking
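
The first stage (top-k retrieval by TF-IDF similarity) can be sketched in pure Python over a hypothetical toy corpus; a production system would use an inverted index or a library such as scikit-learn:

```python
import math
from collections import Counter

docs = [
    "attention is all you need",
    "convolutional networks for images",
    "recurrent networks for sequences",
]

def tfidf_vector(text, corpus):
    """Sparse TF-IDF vector as a dict; terms absent from the corpus are skipped."""
    tokens = text.split()
    tf = Counter(tokens)
    n = len(corpus)
    return {t: (tf[t] / len(tokens)) *
               math.log(n / sum(1 for d in corpus if t in d.split()))
            for t in tf if any(t in d.split() for d in corpus)}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def top_k(query, corpus, k=2):
    """Rank documents by TF-IDF cosine similarity to the query."""
    qv = tfidf_vector(query, corpus)
    scored = [(cosine(qv, tfidf_vector(d, corpus)), d) for d in corpus]
    return [d for _, d in sorted(scored, reverse=True)[:k]]

print(top_k("attention for sequences", docs, k=1))
# ['recurrent networks for sequences']
```

The second-stage re-ranking (semantic encoding or a learned ranking model) would then be applied to these top-k candidates only.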


How would you make a sentiment classifier?

This is a trick question. The interviewee can mention all the usual things, such as transfer learning and the latest models, but they need to talk about having a neutral class too; otherwise you can have really good accuracy/F1 and the model will still classify everything into positive or negative.
The truth is that a lot of news is neutral, so the training data needs to include this class. The interviewee should also talk about how they would create a dataset and their training strategies, such as the selection of the language model, language model fine-tuning, and using various datasets for multitask learning.

What is the difference between regular expression and regular grammar?

A regular expression is the representation of natural language in the form of mathematical expressions containing a character sequence. On the other hand, regular grammar is the generator of natural language, defining a set of defined rules and syntax which the strings in the natural language must follow.

Why should we use Batch Normalization?

Once the interviewer has asked about the fundamentals of deep learning architectures, they will move on to the key topic of improving your deep learning model’s performance.
Batch Normalization is one of the techniques used for reducing the training time of a deep learning algorithm. Just as normalizing our input helps improve a logistic regression model, we can normalize the activations of the hidden layers in a deep learning model as well.

How is backpropagation different in RNN compared to ANN?

In Recurrent Neural Networks, we have an additional loop at each node.
This loop essentially includes a time component into the network as well. This helps in capturing sequential information from the data, which could not be possible in a generic artificial neural network.
This is why the backpropagation in RNN is called Backpropagation through Time, as in backpropagation at each time step.

Which of the following is a challenge when dealing with computer vision problems?

Variations due to geometric changes (like pose, scale, etc), Variations due to photometric factors (like illumination, appearance, etc) and Image occlusion. All the above-mentioned options are challenges in computer vision.

Consider an image with width and height 100×100. Each pixel in the image can have a grayscale color, i.e. 256 values. How much space would this image require for storing?

The answer is 8×100×100 bits, because 8 bits are required to represent a number in the range 0–255.

Why do we use convolutions for images rather than just FC layers?

Firstly, convolutions preserve, encode, and actually use the spatial information from the image. If we used only FC layers we would have no relative spatial information. Secondly, Convolutional Neural Networks (CNNs) have a partially built-in translation invariance, since each convolution kernel acts as its own filter/feature detector.

What makes CNN’s translation-invariant?

As explained above, each convolution kernel acts as its own filter/feature detector. So, say you’re doing object detection: it doesn’t matter where in the image the object is, since we apply the convolution in a sliding-window fashion across the entire image anyway.

Why do we have max-pooling in classification CNNs?

Max-pooling in a CNN allows you to reduce computation, since your feature maps are smaller after pooling. You don’t lose too much semantic information, since you’re taking the maximum activation. There’s also a theory that max-pooling contributes a bit to giving CNNs more translation invariance. Check out the great video from Andrew Ng on the benefits of max-pooling.

Why do segmentation CNNs typically have an encoder-decoder style/structure?

The encoder CNN can basically be thought of as a feature extraction network, while the decoder uses that information to predict the image segments by “decoding” the features and upscaling to the original image size.

What is the significance of Residual Networks?

The main thing that residual connections did was allow for direct feature access from previous layers. This makes information propagation throughout the network much easier. One very interesting paper about this shows how using local skip connections gives the network a type of ensemble multi-path structure, giving features multiple paths to propagate throughout the network.


What is batch normalization and why does it work?

Training Deep Neural Networks is complicated by the fact that the distribution of each layer’s inputs changes during training, as the parameters of the previous layers change. The idea is then to normalize the inputs of each layer in such a way that they have a mean output activation of zero and a standard deviation of one. This is done for each individual mini-batch at each layer i.e compute the mean and variance of that mini-batch alone, then normalize. This is analogous to how the inputs to networks are standardized. How does this help? We know that normalizing the inputs to a network helps it learn.
But a network is just a series of layers, where the output of one layer becomes the input to the next. That means we can think of any layer in a neural network as the first layer of a smaller subsequent network. Thought of as a series of neural networks feeding into each other, we normalize the output of one layer before applying the activation function and then feed it into the following layer (sub-network).

Why would you use many small convolutional kernels such as 3×3 rather than a few large ones?

This is very well explained in the VGGNet paper.

There are 2 reasons: First, you can use several smaller kernels rather than few large ones to get the same receptive field and capture more spatial context, but with the smaller kernels you are using less parameters and computations. Secondly, because with smaller kernels you will be using more filters, you’ll be able to use more activation functions and thus have a more discriminative mapping function being learned by your CNN.

What is Precision?

Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances
Precision = true positive / (true positive + false positive)

What is Recall?

Recall (also known as sensitivity) is the fraction of relevant instances that have been retrieved over the total amount of relevant instances.
Recall = true positive / (true positive + false negative)

Define F1-score.

It is the harmonic mean of precision and recall. It takes both false positives and false negatives into account and is used to measure a model’s performance.
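
The three metrics above can be computed directly from the confusion-matrix counts, as a small sketch for binary labels:

```python
def precision_recall_f1(y_true, y_pred):
    """Precision, recall, and F1 (harmonic mean) for binary 0/1 labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 1, 0, 1])
print(p, r, f)  # all three are 2/3 here (tp=2, fp=1, fn=1)
```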

What is cost function?

The cost function is a scalar function that quantifies the error of the neural network: the lower the cost, the better the network. E.g., on the MNIST image classification dataset, if the input image is the digit 2 and the neural network wrongly predicts it to be 3, the cost function penalizes that error.

List different activation neurons or functions

● Linear Neuron
● Binary Threshold Neuron
● Stochastic Binary Neuron
● Sigmoid Neuron
● Tanh function
● Rectified Linear Unit (ReLU)

Define Learning rate

The learning rate is a hyper-parameter that controls how much we are adjusting the weights of our network with respect to the loss gradient.

What is Momentum (w.r.t NN optimization)?

Momentum lets the optimization algorithm remember its last step and adds some proportion of it to the current step. This way, even if the algorithm is stuck in a flat region or a small local minimum, it can get out and continue towards the true minimum.

What is the difference between Batch Gradient Descent and Stochastic Gradient Descent?

Batch gradient descent computes the gradient using the whole dataset. This is great for convex or relatively smooth error manifolds. In this case, we move somewhat directly towards an optimum solution, either local or global. Additionally, batch gradient descent, given an annealed learning rate, will eventually find the minimum located in its basin of attraction.
Stochastic gradient descent (SGD) computes the gradient using a single sample. SGD works well (Not well, I suppose, but better than batch gradient descent) for error manifolds that have lots of local maxima/minima. In this case, the somewhat noisier gradient calculated using the reduced number of samples tends to jerk the model out of local minima into a region that hopefully is more optimal.

Epoch vs Batch vs Iteration.

Epoch: one forward pass and one backward pass over all the training examples.
Batch: the set of examples processed together in one pass (forward and backward).
Iteration: one update step on a single batch; the number of iterations per epoch equals the number of training examples divided by the batch size.
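
In numbers, with a hypothetical dataset of 10,000 examples and a batch size of 500:

```python
num_examples = 10_000  # hypothetical dataset size
batch_size = 500

# One iteration = one update on one batch, so a full epoch takes:
iterations_per_epoch = num_examples // batch_size
print(iterations_per_epoch)  # 20 iterations make up 1 epoch
```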

What is the vanishing gradient?

As we add more and more hidden layers, backpropagation becomes less and less useful in passing information to the lower layers. In effect, as information is passed back, the gradients begin to vanish and become small relative to the weights of the networks.

What are dropouts?

Dropout is a simple way to prevent a neural network from overfitting: some of the units in the network are randomly dropped out during training. It is similar to the natural reproduction process, where nature produces offspring by combining distinct genes (dropping out others) rather than strengthening their co-adaptation.

What is data augmentation? Can you give some examples?

Data augmentation is a technique for synthesizing new data by modifying existing data in such a way that the target is not changed, or it is changed in a known way. Computer vision is one of the fields where data augmentation is very useful. There are many modifications that we can do to images:
● Resize
● Horizontal or vertical flip
● Rotate, Add noise, Deform
● Modify colors
Each problem needs a customized data augmentation pipeline. For example, in OCR, flips will change the text and won’t be beneficial; however, resizes and small rotations may help.

What are the components of GAN?

● Generator
● Discriminator

What’s the difference between a generative and discriminative model?

A generative model will learn categories of data while a discriminative model will simply learn the distinction between different categories of data. Discriminative models will generally outperform generative models on classification tasks.

What is Linear Filtering?

Linear filtering is a neighborhood operation, which means that the output of a pixel’s value is decided by the weighted sum of the values of the input pixels.
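
A minimal sketch of this weighted-sum neighborhood operation, using uniform 1/9 weights over a 3×3 window (a box blur; borders are skipped for brevity):

```python
def mean_filter_3x3(img):
    """Linear filtering: each output pixel is the weighted sum of its
    3x3 neighborhood, here with uniform weights 1/9 (box blur)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(img[y + dy][x + dx]
                            for dy in (-1, 0, 1)
                            for dx in (-1, 0, 1)) / 9.0
    return out

img = [[0, 0, 0],
       [0, 9, 0],
       [0, 0, 0]]
print(mean_filter_3x3(img)[1][1])  # 1.0 -- the bright pixel is averaged down
```

A Gaussian filter is the same operation with bell-shaped rather than uniform weights.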

How can you achieve Blurring through Gaussian Filter?

This is the most common technique for blurring or smoothing an image. The filter weights the pixel at the center most heavily and slowly diminishes the effect as pixels move away from the center. This filter can also help in removing noise in an image.

What is Non-Linear Filtering? How it is used?

Linear filtering is easy to use and implement, and in some cases it is enough to get the necessary output. However, an increase in performance can be obtained through non-linear filtering, which gives us more control and achieves better results on more complex computer vision tasks.

Explain Median Filtering.

The median filter is an example of a non-linear filtering technique. This technique is commonly used for minimizing the noise in an image. It operates by inspecting the image pixel by pixel and taking the place of each pixel’s value with the value of the neighboring pixel median.
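
A small sketch of the median filter on a toy image (borders skipped for brevity); note how a lone noise spike is removed entirely, unlike with a mean filter:

```python
import statistics

def median_filter_3x3(img):
    """Non-linear filtering: replace each pixel with the median of its
    3x3 neighborhood, which suppresses salt-and-pepper noise."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            window = [img[y + dy][x + dx]
                      for dy in (-1, 0, 1) for dx in (-1, 0, 1)]
            out[y][x] = statistics.median(window)
    return out

img = [[10, 10, 10],
       [10, 255, 10],
       [10, 10, 10]]
print(median_filter_3x3(img)[1][1])  # 10 -- the 255 spike is gone
```
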

What are some techniques for detecting and matching features?

Some techniques for detecting and matching features are:
● Lucas-Kanade
● Harris
● Shi-Tomasi
● SUSAN (smallest univalue segment assimilating nucleus)
● MSER (maximally stable extremal regions)
● SIFT (scale-invariant feature transform)
● HOG (histogram of oriented gradients)
● FAST (features from accelerated segment test)
● SURF (speeded-up robust features)


Describe the Scale Invariant Feature Transform (SIFT) algorithm

SIFT solves the problem of detecting the corners of an object even if it is scaled. Steps to implement this algorithm:
● Scale-space extrema detection – This step will identify the locations and scales that can still be recognized from different angles or views of the same object in an image.
● Keypoint localization – When possible key points are located, they would be refined to get accurate results. This would result in the elimination of points that are low in contrast or points that have edges that are deficiently localized.
● Orientation assignment – In this step, a consistent orientation is assigned to each key point to attain invariance when the image is being rotated.
● Keypoint matching – In this step, the key points between images are now linked to recognizing their nearest neighbors.

Why Speeded-Up Robust Features (SURF) came into existence?

SURF was introduced as a sped-up version of SIFT. Though SIFT can detect and describe the key points of an object in an image, the algorithm is slow.

What is Oriented FAST and rotated BRIEF (ORB)?

This algorithm is a great substitute for SIFT and SURF, mainly because it performs better in computation and matching. It combines the FAST keypoint detector and the BRIEF descriptor, with a number of alterations to improve performance. It is also a great alternative in terms of cost, because the SIFT and SURF algorithms are patented, which means you need to pay to use them.

What is image segmentation?

In computer vision, segmentation is the process of extracting pixels in an image that are related.
Segmentation algorithms usually take an image and produce a group of contours (the boundary of an object that has well-defined edges in an image) or a mask where a set of related pixels are assigned to a unique color value to identify it.
Popular image segmentation techniques:
● Active contours
● Level sets
● Graph-based merging
● Mean Shift
● Texture and intervening contour-based normalized cuts

What is the purpose of semantic segmentation?

The purpose of semantic segmentation is to categorize every pixel of an image into a certain class or label. In semantic segmentation, we can see the class of a pixel simply by looking at its color, but one downside is that we cannot identify whether two masks of the same color belong to the same object or to different objects.

Explain instance segmentation.

In semantic segmentation, the only thing that matters to us is the class of each pixel. This would somehow lead to a problem that we cannot identify if that class belongs to the same object or not.
Semantic segmentation cannot identify if two objects in an image are separate entities. So to solve this problem, instance segmentation was created. This segmentation can identify two different objects of the same class. For example, if an image has two sheep in it, the sheep will be detected and masked with different colors to differentiate what instance of a class they belong to.

How is panoptic segmentation different from semantic/instance segmentation?

Panoptic segmentation is basically a union of semantic and instance segmentation. In panoptic segmentation, every pixel is classified by a certain class and those pixels that have several instances of a class are also determined. For example, if an image has two cars, these cars will be masked with different colors. These colors represent the same class — car — but point to different instances of a certain class.

Explain the problem of recognition in computer vision.

Recognition is one of the toughest challenges in the concepts in computer vision. Why is recognition hard? For the human eyes, recognizing an object’s features or attributes would be very easy. Humans can recognize multiple objects with very small effort. However, this does not apply to a machine. It would be very hard for a machine to recognize or detect an object because these objects vary. They vary in terms of viewpoints, sizes, or scales. Though these things are still challenges faced by most computer vision systems, they are still making advancements or approaches for solving these daunting tasks.

What is Object Recognition?

Object recognition is used for indicating an object in an image or video. This is a product of machine learning and deep learning algorithms. Object recognition tries to acquire this innate human ability, which is to understand certain features or visual detail of an image.

What is Object Detection and it’s real-life use cases?

Object detection in computer vision refers to the ability of machines to pinpoint the location of an object in an image or video. A lot of companies have been using object detection techniques in their system. They use it for face detection, web images, and security purposes.

Describe Optical Flow, its uses, and assumptions.

Optical flow is the pattern of apparent motion of image objects between two consecutive frames, caused by the movement of the object or the camera. It is a 2D vector field where each vector is a displacement vector showing the movement of points from the first frame to the second.
Optical flow has many applications in areas like :
● Structure from Motion
● Video Compression
● Video Stabilization
Optical flow works on several assumptions:
1. The pixel intensities of an object do not change between consecutive frames.
2. Neighboring pixels have similar motion.


What is Histogram of Oriented Gradients (HOG)?

HOG stands for Histograms of Oriented Gradients. HOG is a type of “feature descriptor”. The intent of a feature descriptor is to generalize the object in such a way that the same object (in this case a person) produces as close as possible to the same feature descriptor when viewed under different conditions. This makes the classification task easier.
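As an illustration of the idea (not the full HOG pipeline, which adds cells, overlapping blocks, block normalization, and interpolation), here is a toy gradient-orientation histogram for a single patch in NumPy; all names are illustrative:

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Toy version of the HOG idea for one patch: a histogram of gradient
    orientations (0-180 degrees, unsigned), weighted by gradient magnitude."""
    gy, gx = np.gradient(patch.astype(float))
    magnitude = np.hypot(gx, gy)
    angle = np.degrees(np.arctan2(gy, gx)) % 180.0
    bins = np.minimum((angle / (180.0 / n_bins)).astype(int), n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), magnitude.ravel())
    return hist / (np.linalg.norm(hist) + 1e-9)  # normalize the descriptor

# a patch whose intensity ramps left-to-right -> horizontal gradients (bin 0)
patch = np.tile(np.arange(8), (8, 1))
hist = orientation_histogram(patch)
print(hist.argmax())  # 0
```

Because the descriptor is built from gradient *orientations*, a person photographed under different lighting tends to produce a similar histogram, which is exactly what makes downstream classification easier.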

What’s the difference between valid and same padding in a CNN?

This question has more chances of being a follow-up question to the previous one. Or if you have explained how you used CNNs in a computer vision task, the interviewer might ask this question along with the details of the padding parameters.
● Valid padding: when we do not use any padding. The resultant matrix after convolution will have dimensions (n – f + 1) × (n – f + 1)
● Same padding: padding elements are added all around the edges such that the output matrix has the same dimensions as the input matrix
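The two cases can be checked with a small helper (an illustrative sketch assuming stride 1 and an odd filter size for “same” padding; the function name is made up):

```python
def conv_output_size(n, f, padding="valid", stride=1):
    """Output side length of a square convolution.

    n: input side length, f: filter side length.
    'valid' adds no padding; 'same' pads so that (at stride 1)
    the output matches the input size.
    """
    if padding == "valid":
        p = 0
    elif padding == "same":
        p = (f - 1) // 2  # assumes an odd filter size
    else:
        raise ValueError(padding)
    return (n + 2 * p - f) // stride + 1

print(conv_output_size(6, 3, "valid"))  # 4, i.e. (n - f + 1)
print(conv_output_size(6, 3, "same"))   # 6, same as the input
```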

What is Bag-of-Visual-Words (BOV)?

BOV, also called the bag of keypoints, is based on vector quantization. Similar to HOG features, BOV features are histograms that count the number of occurrences of certain patterns within a patch of the image.

What are Poselets? Where are poselets used?

Poselets rely on manually added extra keypoints such as “right shoulder”, “left shoulder”, “right knee” and “left knee”. They were originally used for human pose estimation.

Explain Textons in context of CNNs

A texton is the minimal building block of vision. The computer vision literature does not give a strict definition for textons, but edge detectors could be one example. One might argue that deep learning techniques with Convolutional Neural Networks (CNNs) learn textons in their first filters.

What are Markov Random Fields (MRFs)?

MRFs are undirected probabilistic graphical models and a widespread model in computer vision. The overall idea of MRFs is to assign a random variable for each feature and a random variable for each pixel.

Explain the concept of a superpixel.

A superpixel is an image patch that is better aligned with intensity edges than a rectangular patch.
Superpixels can be extracted with any segmentation algorithm, however, most of them produce highly irregular superpixels, with widely varying sizes and shapes. A more regular space tessellation may be desired.

What is Non-Maximum Suppression (NMS) and where is it used?

NMS is often used along with edge detection algorithms. The image is scanned along the image gradient direction, and if pixels are not part of the local maxima they are set to zero. It is widely used in object detection algorithms.
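In object detection, NMS keeps the highest-scoring box and discards boxes that overlap it too much. A minimal greedy sketch in plain Python (names and the IoU threshold are illustrative):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy NMS: repeatedly keep the best-scoring box and drop overlaps."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < iou_thresh]
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: box 1 overlaps box 0 too much
```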

Describe the use of Computer Vision in Healthcare.

Computer vision has also been an important part of advances in health tech. Computer vision algorithms can help automate tasks such as detecting cancerous moles in skin images or finding symptoms in X-ray and MRI scans.

Describe the use of Computer Vision in Augmented Reality & Mixed Reality

Computer vision also plays an important role in augmented and mixed reality, the technology that enables computing devices such as smartphones, tablets, and smart glasses to overlay and embed virtual objects on real-world imagery. Using computer vision, AR gear detects objects in the real world in order to determine the locations on a device’s display to place a virtual object.
For instance, computer vision algorithms can help AR applications detect planes such as tabletops, walls, and floors, a very important part of establishing depth and dimensions and placing virtual objects in the physical world.

Describe the use of Computer Vision in Facial Recognition

Computer vision also plays an important role in facial recognition applications, the technology that enables computers to match images of people’s faces to their identities. Computer vision algorithms detect facial features in images and compare them with databases of face profiles.
Consumer devices use facial recognition to authenticate the identities of their owners. Social media apps use facial recognition to detect and tag users. Law enforcement agencies also rely on facial recognition technology to identify criminals in video feeds.

Describe the use of Computer Vision in Self-Driving Cars

Computer vision enables self-driving cars to make sense of their surroundings. Cameras capture video from different angles around the car and feed it to computer vision software, which then processes the images in real-time to find the extremities of roads, read traffic signs, detect other cars, objects, and pedestrians. The self-driving car can then steer its way on streets and highways, avoid hitting obstacles, and (hopefully) safely drive its passengers to their destination.

Explain famous Computer Vision tasks using a single image example.

Many popular computer vision applications involve trying to recognize things in photographs; for example:
Object Classification: What broad category of object is in this photograph?
Object Identification: Which type of a given object is in this photograph?
Object Verification: Is the object in the photograph?
Object Detection: Where are the objects in the photograph?
Object Landmark Detection: What are the key points for the object in the photograph?
Object Segmentation: What pixels belong to the object in the image?
Object Recognition: What objects are in this photograph and where are they?

Explain the distinction between Computer Vision and Image Processing.

Computer vision is distinct from image processing.
Image processing is the process of creating a new image from an existing image, typically simplifying or enhancing the content in some way. It is a type of digital signal processing and is not concerned with understanding the content of an image.
A given computer vision system may require image processing to be applied to raw input, e.g. pre-processing images.
Examples of image processing include:
● Normalizing photometric properties of the image, such as brightness or color.
● Cropping the bounds of the image, such as centering an object in a photograph.
● Removing digital noise from an image, such as digital artifacts from low light levels.

Explain business use cases in computer vision.

● Optical character recognition (OCR)
● Machine inspection
● Retail (e.g. automated checkouts)
● 3D model building (photogrammetry)
● Medical imaging
● Automotive safety
● Match move (e.g. merging CGI with live actors in movies)
● Motion capture (mocap)
● Surveillance
● Fingerprint recognition and biometrics


What is the Boltzmann Machine?

One of the most basic Deep Learning models is a Boltzmann Machine, resembling a simplified version of the Multi-Layer Perceptron. This model features a visible input layer and a hidden layer — just a two-layer neural net that makes stochastic decisions as to whether a neuron should be on or off. Nodes are connected across layers, but no two nodes of the same layer are connected.

What Is the Role of Activation Functions in a Neural Network?

At the most basic level, an activation function decides whether a neuron should fire or not. It takes the weighted sum of the inputs plus a bias as its input. The step function, sigmoid, ReLU, tanh, and softmax are examples of activation functions.

What Is the Difference Between a Feedforward Neural Network and Recurrent Neural Network?

In a Feedforward Neural Network, signals travel in one direction, from input to output. There are no feedback loops; the network considers only the current input and cannot memorize previous inputs (e.g., a CNN). In a Recurrent Neural Network, signals travel in loops: the hidden state is fed back into the network, so the current output depends on previous inputs as well, which makes RNNs suited to sequential data such as text or time series.

What Are the Applications of a Recurrent Neural Network (RNN)?

The RNN can be used for sentiment analysis, text mining, and image captioning. Recurrent Neural Networks can also address time series problems such as predicting the prices of stocks in a month or quarter.

What Are the Softmax and ReLU Functions?

Softmax is an activation function that generates outputs between zero and one and divides each output such that the total sum of the outputs is equal to one; it is often used for output layers. ReLU (Rectified Linear Unit) outputs the input directly if it is positive and zero otherwise; it is the most widely used activation function in hidden layers.
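Both functions are easy to sketch in plain Python (illustrative, not a library implementation):

```python
import math

def softmax(xs):
    """Map raw scores to probabilities that sum to one."""
    m = max(xs)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def relu(x):
    """Pass positive inputs through unchanged; zero out the rest."""
    return max(0.0, x)

probs = softmax([2.0, 1.0, 0.1])
print(probs)                   # each value lies in (0, 1)
print(sum(probs))              # 1.0, up to float rounding
print(relu(-3.0), relu(2.5))   # 0.0 2.5
```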



Machine Learning Techniques

What Is Overfitting, and How Can You Avoid It?

Overfitting is a situation that occurs when a model learns the training set too well, taking up random fluctuations in the training data as concepts. These impact the model’s ability to generalize and don’t apply to new data.
When a model is given the training data, it shows close to 100 percent accuracy (technically, a slight loss). But when we use the test data, there may be an error and low efficiency. This condition is known as overfitting.
There are multiple ways of avoiding overfitting, such as:
● Regularization, which adds a cost term for the features involved to the objective function
● Making a simpler model: with fewer variables and parameters, the variance can be reduced
● Cross-validation methods like k-fold
● If some model parameters are likely to cause overfitting, regularization techniques like LASSO can be used to penalize these parameters

What is meant by ‘Training set’ and ‘Test Set’?

We split the given data set into two different sections namely, ‘Training set’ and ‘Test Set’.
‘Training set’ is the portion of the dataset used to train the model.
‘Testing set’ is the portion of the dataset used to test the trained model.

How Do You Handle Missing or Corrupted Data in a Dataset?

One of the easiest ways to handle missing or corrupted data is to drop those rows or columns, or to replace them entirely with some other value.
There are two useful methods in Pandas:
● isnull() and dropna() will help find the columns/rows with missing data and drop them
● fillna() will replace the missing values with a placeholder value
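A small illustrative example of both approaches using the Pandas methods above (the toy DataFrame and the placeholder values are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 31], "city": ["NYC", "LA", None]})

print(df.isnull().sum())        # count of missing values per column

dropped = df.dropna()           # drop any row containing a missing value
print(len(dropped))             # 1: only the fully populated row survives

# fill each column with a sensible placeholder: the mean for a numeric
# column, a sentinel string for a categorical one
filled = df.fillna({"age": df["age"].mean(), "city": "unknown"})
print(filled["age"].tolist())   # [25.0, 28.0, 31.0]
```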

How Do You Design an Email Spam Filter?

Building a spam filter involves the following process:

● The email spam filter will be fed with thousands of emails
● Each of these emails already has a label: ‘spam’ or ‘not spam.’
● The supervised machine learning algorithm will then determine which type of emails are being marked as spam based on spam words like ‘lottery’, ‘free offer’, ‘no money’, and ‘full refund’
● The next time an email is about to hit your inbox, the spam filter will use statistical analysis and algorithms like Decision Trees and SVM to determine how likely the email is spam
● If the likelihood is high, it will label it as spam, and the email won’t hit your inbox
● Based on the accuracy of each model, we will use the algorithm with the highest accuracy after testing all the models
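As a toy sketch of the supervised step, here is a minimal word-count Naive Bayes classifier (a simpler stand-in for the Decision Trees and SVMs mentioned above; the data, labels, and function names are all illustrative):

```python
import math
from collections import Counter

def train(emails):
    """emails: list of (text, label) pairs with label 'spam' or 'ham'."""
    counts = {"spam": Counter(), "ham": Counter()}
    totals = Counter()
    for text, label in emails:
        counts[label].update(text.lower().split())
        totals[label] += 1
    return counts, totals

def predict(counts, totals, text):
    """Pick the label with the higher log posterior, with add-one smoothing."""
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(totals[label] / sum(totals.values()))  # log prior
        n = sum(counts[label].values())
        for w in text.lower().split():
            score += math.log((counts[label][w] + 1) / (n + len(vocab)))
        scores[label] = score
    return max(scores, key=scores.get)

data = [("free offer win lottery", "spam"),
        ("full refund no money down", "spam"),
        ("meeting notes attached", "ham"),
        ("lunch tomorrow?", "ham")]
counts, totals = train(data)
print(predict(counts, totals, "win a free lottery offer"))  # spam
```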

Explain bagging.

Bagging, or Bootstrap Aggregating, is an ensemble method in which the dataset is first divided into multiple subsets through resampling.
Then, each subset is used to train a model, and the final predictions are made through voting or averaging the component models.
Bagging is performed in parallel.

What is the ROC Curve and what is AUC (a.k.a. AUROC)?

The ROC (receiver operating characteristic) curve is a performance plot for binary classifiers of True Positive Rate (y-axis) vs. False Positive Rate (x-axis).
AUC is the area under the ROC curve, and it’s a common performance metric for evaluating binary classification models.
It’s equivalent to the expected probability that a uniformly drawn random positive is ranked before a uniformly drawn random negative.
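That rank interpretation can be computed directly, without drawing the curve (a minimal sketch; ties count as half a win, and the function name is illustrative):

```python
def auc(labels, scores):
    """AUC as the probability that a random positive outranks a random
    negative. labels: 1 = positive, 0 = negative."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.2]
print(auc(labels, scores))  # 0.75: 3 of the 4 positive/negative pairs are ranked correctly
```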

 

What are the various Machine Learning algorithms?

Common Machine Learning algorithms, grouped by learning style, include:
● Supervised learning: linear regression, logistic regression, decision trees, random forests, support vector machines (SVM), naive Bayes, k-nearest neighbors
● Unsupervised learning: k-means clustering, hierarchical clustering, principal component analysis (PCA)
● Ensemble methods: bagging and boosting (e.g., AdaBoost, gradient boosting)
What is cross-validation?

 

Reference: k-fold cross-validation

Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter called k that refers to the number of groups that a given data sample is to be split into; as such, the procedure is often called k-fold cross-validation. When a specific value for k is chosen, it may be used in place of k in the reference to the model, such as k=10 becoming 10-fold cross-validation. It is mainly used in settings where the objective is prediction and one wants to estimate how accurately a model will perform in practice.

 

Cross-validation is primarily used in applied machine learning to estimate the skill of a machine learning model on unseen data. That is, to use a limited sample in order to estimate how the model is expected to perform in general when used to make predictions on data not used during the training of the model.

 

It is a popular method because it is simple to understand and because it generally results in a less biased or less optimistic estimate of the model skill than other methods, such as a simple train/test split.

 

The general procedure is as follows:
1. Shuffle the dataset randomly.
2. Split the dataset into k groups
3. For each unique group:
a. Take the group as a hold out or test data set
b. Take the remaining groups as a training data set
c. Fit a model on the training set and evaluate it on the test set
d. Retain the evaluation score and discard the model
4. Summarize the skill of the model using the sample of model evaluation scores
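The steps above can be sketched in plain Python (illustrative; the toy scorer simply predicts the training mean and reports negative squared error):

```python
import random

def k_fold_cv(data, k, train_and_score, seed=0):
    """Shuffle, split into k folds, and for each fold train on the rest
    and evaluate on the held-out fold. Returns the mean score."""
    data = data[:]
    random.Random(seed).shuffle(data)          # step 1: shuffle
    folds = [data[i::k] for i in range(k)]     # step 2: k near-equal groups
    scores = []
    for i in range(k):                         # step 3: hold out each fold once
        test = folds[i]
        train = [x for j, f in enumerate(folds) if j != i for x in f]
        scores.append(train_and_score(train, test))  # fit + evaluate
    return sum(scores) / k                     # step 4: summarize the scores

def scorer(train, test):
    """Toy model: predict the training mean, score = negative MSE."""
    mean = sum(train) / len(train)
    return -sum((x - mean) ** 2 for x in test) / len(test)

print(k_fold_cv(list(range(20)), k=5, train_and_score=scorer))
```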

What are 3 data preprocessing techniques to handle outliers?

1. Winsorize (cap at threshold).
2. Transform to reduce skew (using Box-Cox or similar).
3. Remove outliers if you’re certain they are anomalies or measurement errors.
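The first technique, winsorizing, can be sketched in a few lines (illustrative, using simple nearest-rank percentiles rather than a statistics library):

```python
def winsorize(values, lower_pct=5, upper_pct=95):
    """Cap values at the given lower/upper percentiles (nearest-rank)."""
    s = sorted(values)
    def pct(p):
        idx = min(len(s) - 1, max(0, int(round(p / 100 * (len(s) - 1)))))
        return s[idx]
    lo, hi = pct(lower_pct), pct(upper_pct)
    return [min(max(v, lo), hi) for v in values]

data = [1, 2, 3, 4, 5, 100]        # 100 is an outlier
print(winsorize(data, 0, 80))      # [1, 2, 3, 4, 5, 5]
```

The outlier is capped at the 80th-percentile value instead of being removed, so the row count and the rest of the distribution stay intact.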

How much data should you allocate for your training, validation, and test sets?

You have to find a balance, and there’s no right answer for every problem.
If your test set is too small, you’ll have an unreliable estimation of model performance (performance statistic will have high variance). If your training set is too small, your actual model parameters will have a high variance.
A good rule of thumb is to use an 80/20 train/test split. Then, your train set can be further split into train/validation or into partitions for cross-validation.
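The 80/20 rule of thumb as a minimal sketch (the function name and seed are illustrative):

```python
import random

def train_test_split(data, test_frac=0.2, seed=42):
    """Shuffle, then hold out the last test_frac of the data as the test set."""
    data = data[:]
    random.Random(seed).shuffle(data)
    cut = int(len(data) * (1 - test_frac))
    return data[:cut], data[cut:]

train, test = train_test_split(list(range(100)))
print(len(train), len(test))  # 80 20
```

The returned train portion can then be split again into train/validation, or handed to a cross-validation routine.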

What Is a False Positive and False Negative and How Are They Significant?

False positives are cases that are wrongly classified as True but are actually False.
False negatives are cases that are wrongly classified as False but are actually True.
In the term ‘False Positive’, the word ‘Positive’ refers to the ‘Yes’ row of the predicted value in the confusion matrix. The complete term indicates that the system predicted a positive, but the actual value is negative.

What’s a Fourier transform?

A Fourier transform is a generic method to decompose generic functions into a superposition of symmetric functions. Or as this more intuitive tutorial puts it, given a smoothie, it’s how we find the recipe. The Fourier transform finds the set of cycle speeds, amplitudes, and phases to match any time signal. A Fourier transform converts a signal from time to frequency domain — it’s a very common way to extract features from audio signals or other time series such as sensor data.
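The time-to-frequency idea can be demonstrated with a naive DFT in plain Python (illustrative; real code would use an FFT library for speed):

```python
import cmath, math

def dft(signal):
    """Naive discrete Fourier transform (O(n^2)), enough to show the move
    from the time domain to the frequency domain."""
    n = len(signal)
    return [sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                for t in range(n))
            for k in range(n)]

# a pure sine with 5 cycles per window, sampled at 64 points
n = 64
signal = [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]
spectrum = [abs(c) for c in dft(signal)][: n // 2]  # keep positive frequencies
peak = spectrum.index(max(spectrum))
print(peak)  # 5: the dominant bin matches the 5-cycle sine
```

This is exactly the “find the recipe” intuition: the time signal goes in, and the cycle speeds (and their amplitudes) come out.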

 

Machine Learning Cheat Sheets, Tutorial, Practical examples, References, Datasets


Machine Learning Cheat Sheet

Machine Learning Cheat Sheets

Download it here

Credit: Remi Canard

TensorFlow Practical Examples and Tutorial

– Basic Models
Linear Regression
Logistic Regression
Word2Vec (Word Embedding)

– Neural Networks
Simple Neural Network
Convolutional Neural Network
Recurrent Neural Network (LSTM)
Bi-directional Recurrent Neural Network (LSTM)
Dynamic Recurrent Neural Network (LSTM)

-Unsupervised
Auto-Encoder
DCGAN (Deep Convolutional Generative Adversarial Networks)

-Utilities:
Save and Restore a model
Build Custom Layers & Modules

– Data Management
Load and Parse data
Build and Load TFRecords
Image Transformation (i.e. Image Augmentation)

TensorFlow Examples and Tutorials

Download it here

Credit: Alex Wang

Cool MLOps repository of free talks, books, papers and more

Link to the repo:


Machine Learning  Training Videos

 



References

1 https://springboard.com
2 https://simplilearn.com
3 https://geeksforgeeks.org
4 https://elitedatascience.com
5 https://analyticsvidhya.com
6 https://guru99.com
7 https://intellipaat.com
8 https://towardsdatascience.com
9 https://mygreatlearning.com
10 https://mindmajix.com
11 https://toptal.com
12 https://glassdoor.co.in
13 https://udacity.com
14 https://educba.com
15 https://analyticsindiamag.com
16 https://ubuntupit.com
17 https://javatpoint.com
18 https://quora.com
19 https://hackr.io
20 https://kaggle.com
21 https://www.linkedin.com/in/stevenouri/

 


Explain differences between AI, Machine Learning and NLP

 

 

 

 

 

 

 

 

 

 

 

 

 

Artificial Intelligence | Machine Learning | Natural Language Processing
It is the technique to create smarter machines | Machine Learning is the term used for systems that learn from experience | NLP is the set of systems that can understand language
AI includes human intervention | Machine Learning purely involves the working of computers, with no human intervention | NLP links both computer and human languages
Artificial intelligence is a broader concept than Machine Learning | ML is a narrow concept and is a subset of AI |

Top Machine Learning Algorithms for Predictions:

Top ML algorithms for predictions

TensorFlow Interview Questions and Answers

Tensorflow Interview Questions and Answers

Direct link here



Machine learning is just one component of a larger field called artificial intelligence (AI). AI researchers have done an excellent job at describing the fundamental problems they must solve to achieve intelligent behavior; these problems fall into four general categories: representation, reasoning, learning, and search.

Basically, all of AI research can be classified under these headings; for example, language understanding is a special case of representation (natural language), planning is a special case of reasoning (analogical logical inferences), learning to play chess is a special case of learning (policy search in the game tree), and table lookup is a special case of search (symbol-table lookups). We will focus on two: representation and search.

What follows are our ten favorite problems/areas for the next decade or so. Each one has been researched quite heavily already, but we think that there are no silver bullets yet discovered nor are there any obvious candidates lurking in the wings waiting to take over. Each area has a different flavor to it; all have something to offer the machine learning community, and we believe that many will find fertile ground for their own investigations.

Machine learning methods are useful on large problems, which is becoming increasingly important as applications such as speech recognition are moving into real-world situations outside the lab (e.g., using voice commands while driving). Solution: This is a difficult one because there are many possible solutions to this problem; all will require advances in both theoretical and experimental techniques but we do not know what they are yet. A better understanding of why certain learning algorithms work well on some types of problems but not others may provide insights into how to scale them up. Some examples of the types of problems we would like to tackle include: (i) learning from large databases, (ii) learning in multiple domains, and (iii) learning task-specific knowledge.

Artificial intelligence methods have been used to solve combinatorial problems such as chess playing and problem-solving; these are problems that can be represented as a search tree using nodes representing possible moves for each player. These methods work well on small problems but often fail when applied to larger real-world problems because there are too many options in the search trees that must be explored. For example, consider a game where there are 100 moves per second for each player with 10^100 different games possible over a 40 year lifetime. Solving the AI problem amounts to finding a winning strategy. This is much different from the type of problems we are used to solving which normally fit in memory and where the number of potential options can be kept manageable. Solution: We need better methods than those currently available for searching through very large trees; these could involve ideas from machine learning, such as neural networks or evolutionary algorithms.

Searching for solutions to a problem among all possible alternatives is an important capability but one that has not been researched nearly enough due to its complexity. A brute-force search would seem to require enumerating all alternatives, which is impossible even on extremely simple problems, whereas other approaches seem so specialized that they have little value outside their specific domain (and sometimes not even there). In contrast, machine learning methods can be applied to virtually any problem where the solution space is finite (e.g., finding a path through a graph or board games like chess).

The brute-force approach of enumerating all possible combinations has been successfully applied to optimization problems where only a few desirable solutions are available, but there are many applications that require solving very large problems with thousands or millions of potential solutions. Examples include the Traveling Salesman Problem and scheduling tasks for an airline crew using dozens of variables (e.g., number of passengers flying, weight, the distance between origin and destination cities), a task which becomes more difficult because it must deal with occasional breakdowns in equipment. Any feasible algorithm will require shortcuts that often involve approximations or heuristics. Source.

What is the main purpose of using PCA on a dataset, and what are some examples of its application?

PCA is short for Principal Component Analysis, and it’s a technique used to reduce the dimensionality of a dataset. In other words, it helps you find the important variables in a dataset and get rid of the noise. PCA is used in a variety of fields, from image recognition to facial recognition to machine learning.

PCA has a few main applications:
– Reducing the number of features in a dataset
– Finding relationships between features
– Identifying clusters in data
– Visualizing data

Let’s take a look at an example. Say you have a dataset with 1000 features (variables). PCA can help you reduce that down to, say, 10 features that explain the majority of variance in the data. This is helpful because it means you can build a model with far fewer features, which makes it simpler and faster. In addition, PCA can help you to find relationships between features and identify clusters in data. All of this can be extremely helpful in understanding and using your data.

PCA is an important tool in Machine Learning, and has a number of applications. The main purpose of PCA is to reduce the dimensionality of a dataset, while still retaining as much information as possible. This can be useful when dealing with very large datasets, as it can make training and testing faster and more efficient. PCA is also often used for data visualization, as it can help to create clear and concise visualizations of high-dimensional data. Finally, PCA can be used for feature selection, as it can help to identify the most important features in a dataset. PCA is a powerful tool that can be applied in many different ways, and is an essential part of any Machine Learning workflow.
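A minimal sketch of PCA via eigendecomposition of the covariance matrix in NumPy (the data, function name, and seed are illustrative):

```python
import numpy as np

def pca(X, n_components):
    """Project X onto its top principal components. Returns the projected
    data and the fraction of total variance those components explain."""
    Xc = X - X.mean(axis=0)                     # center each feature
    cov = np.cov(Xc, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending order
    order = np.argsort(eigvals)[::-1]           # re-sort descending
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    components = eigvecs[:, :n_components]
    explained = eigvals[:n_components].sum() / eigvals.sum()
    return Xc @ components, explained

# toy data: two features that are almost perfectly correlated
rng = np.random.default_rng(0)
x = rng.normal(size=200)
X = np.column_stack([x, 2 * x + 0.01 * rng.normal(size=200)])
Z, explained = pca(X, 1)
print(Z.shape)            # (200, 1)
print(explained > 0.99)   # True: one component captures nearly all variance
```

This mirrors the 1000-features example above: when features are redundant, a handful of components retain almost all of the variance.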

What are subservient sounding male names suitable for an automated assistant?

Artificial intelligence is increasingly becoming a staple in our lives, with everything from our homes to our workplaces being automated to some degree. And as AI becomes more ubiquitous, we are starting to see a trend of subservient-sounding names being given to male automated assistants. This is likely due to a combination of factors, including the fact that women are still primarily seen as domestic servants and the fact that many people find it easier to relate to a male voice. Whatever the reason, it seems that subservient-sounding names are here to stay when it comes to male AI. So if you’re looking for a name for your new automated assistant, here are some subservient-sounding male names to choose from:

– Jasper: A popular name meaning “treasurer” or “bringer of riches.”
– Custer: A name derived from the Latin word for “servant.”
– Luther: A Germanic name meaning “army of warriors.”
– Benson: A name of English origin meaning “son of Ben.”
– Wilfred: A name of Germanic origin meaning “desires peace.”

In recent years, there has been an increasing trend of using subservient sounding male names for automated assistants. Artificial intelligence is becoming more prevalent in our everyday lives, and automation is slowly but surely taking over many routine tasks. As such, it’s no surprise that we’re seeing a name trend emerge that reflects our growing dependence on these technologies. So what are some suitable names for an automated assistant? How about “Robo-Bob”? Or “Mecha-Mike”? Perhaps even “Cyber-Steve”? Whatever you choose, just be sure to pick a name that sounds suitably subservient! After all, your automated assistant should reflect your growing dependency on technology… and not your growing dominance over it!

How do you calculate user churn rate?

Churn rate is a metric that measures the percentage of users who leave or discontinue using a service within a given time period. In its simplest form, churn rate = (users lost during the period ÷ users at the start of the period) × 100. The churn rate is an important metric for businesses to track because it can help them identify areas where their product or service is losing users. There are many ways to estimate and reduce churn, and one increasingly popular approach is to use machine learning or artificial intelligence. Artificial intelligence can help identify patterns in user behavior that may indicate that someone is about to leave the service. By tracking these patterns, businesses can be proactive in addressing user needs and reducing the chances of losing them. In addition, automation can also help reduce the churn rate by making it easier for users to stay with the service. Automation can handle tasks like customer support and billing, freeing up users’ time and making it less likely that they will discontinue their subscription. By using machine learning and artificial intelligence, businesses can more accurately predict and prevent user churn.

There are a few different ways to calculate the user churn rate using artificial intelligence. One way is to use a technique called Artificial Neural Networks. This involves training a computer to recognize patterns in data. Once the computer has learned to recognize these patterns, it can then make predictions about future data. Another way to calculate the user churn rate is to use a technique called Support Vector Machines. This approach uses algorithms to find the boundaries between different groups of data. Once these boundaries have been found, the algorithm can then make predictions about new data points. Finally, there is a technique called Bayesian inference. This approach uses probability theory to make predictions about future events. By using these three techniques, it is possible to calculate the user churn rate with a high degree of accuracy.
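Before any modeling, the basic rate itself is a one-line calculation (an illustrative sketch; the numbers are made up):

```python
def churn_rate(users_at_start, users_lost):
    """Churn rate over a period = users lost / users at the start, as a %."""
    return users_lost / users_at_start * 100

# 1,000 subscribers on day one of the month; 50 cancelled during the month
print(churn_rate(1000, 50))  # 5.0
```

The ML techniques described above (neural networks, SVMs, Bayesian inference) come in when you want to *predict* which users will contribute to next period's numerator.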

 

#datascience #machinelearning

Machine Learning Techniques illustrated

How to confuse Machine Learning and AI?

Folks with no educational background taking a MOOC or two in deep learning, entering the field, and skipping over basic concepts in machine learning: specificity/sensitivity, the difference between supervised and unsupervised learning, linear regression, ensembles, proper design of a study/test, probability distributions… With enough MOOCs, you can sound like you know what you are doing, but as soon as something goes wrong or changes slightly, there’s no knowledge about how to fix it. This is a big problem in employment, particularly when hiring a first machine learning engineer/data scientist. Source: Colleen Farrelly

With rapid developments in artificial intelligence (AI) technology, the use of AI to mine clinical data has become a major trend in the medical industry. Utilizing advanced AI algorithms for medical image analysis, one of the critical parts of clinical diagnosis and decision-making, has become an active research area in both industry and academia. Recent applications of deep learning in medical image analysis involve various computer vision-related tasks such as classification, detection, segmentation, and registration. Among them, classification, detection, and segmentation are fundamental and the most widely used tasks that can be done with Scale, but the rest of the more demanding methods require a more sophisticated platform, for example Tasq.

Although there exist a number of reviews on deep learning methods on medical image analysis, most of them emphasize either on general deep learning techniques or on specific clinical applications. The most comprehensive review paper is the work of Litjens et al. published in 2017. Deep learning is such a quickly evolving research field; numerous state-of-the-art works have been proposed since then.

AI Technologies in Medical Image Analysis

Different medical imaging modalities have their unique characteristics and different responses to human body structure and organ tissue and can be used in different clinical purposes. The commonly used image modalities for diagnostic analysis in clinic include projection imaging (such as X-ray imaging), computed tomography (CT), ultrasound imaging, and magnetic resonance imaging (MRI). MRI sequences include T1, T1-w, T2, T2-w, diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and fluid attenuation inversion recovery (FLAIR). Figure 1 demonstrates a few examples of medical image modalities and their corresponding clinical applications.

Image Classification for Medical Image Analysis

As a fundamental task in computer vision, image classification plays an essential role in computer-aided diagnosis. A straightforward use of image classification for medical image analysis is to classify an input image or a series of images as either containing one (or a few) of predefined diseases or free of diseases (i.e., healthy case). Typical clinical applications of image classification tasks include skin disease identification in dermatology, eye disease recognition in ophthalmology (such as diabetic retinopathy, glaucoma, and corneal diseases). Classification of pathological images for various cancers such as breast cancer and brain cancer also belongs to this area.

Convolutional neural networks (CNNs) are the dominant classification framework for image analysis. With the development of deep learning, the CNN framework has continuously improved. AlexNet was a pioneering convolutional neural network, composed of repeated convolutions, each followed by ReLU and max pooling with stride for downsampling. VGGNet used small 3×3 convolution kernels and maximum pooling to simplify the structure of AlexNet and showed improved performance by simply increasing the number of layers and the depth of the network. By combining and stacking 1×1, 3×3, and 5×5 convolution kernels and pooling, the Inception network and its variants increased the width and the adaptability of the network. ResNet and DenseNet both used skip connections to relieve gradient vanishing. SENet proposed a squeeze-and-excitation module which enabled the model to pay more attention to the most informative channel features. The EfficientNet family applied AutoML and a compound scaling method to uniformly scale the width, depth, and resolution of the network in a principled way, resulting in improved accuracy and efficiency. Source: Kelly Holland

GPUs also process things. It’s just that they’re better and faster at “specific” things.

The main stuff a GPU is “awesome” at, exactly because it is designed to be specific with those: Matrix maths. The sorts of calculation used when converting a bunch of 3d points (XYZ values) into an approximation of how such a shape would look from a camera. I.e. rendering a 2d picture from a 3d object – exactly why a GPU is made in the first place: https://www.3dgep.com/3d-math-primer-for-game-programmers-matrices/

The sorts of calculations used in current “AI” ? Guess what? Matrix maths:

https://rossbulat.medium.com/ai-essentials-working-with-matrices-2ceb9ca3bd1b

By Irne Barnard
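To make the matrix-maths point concrete, here is a tiny NumPy sketch of that rendering step: one matrix multiply maps a whole batch of 3D points to 2D screen coordinates. The points and focal length are made up for illustration; a real GPU does this for millions of vertices in parallel.

```python
import numpy as np

# 3D points in homogeneous coordinates (x, y, z, 1).
points = np.array([
    [0.0, 1.0, 3.0, 1.0],
    [1.0, -1.0, 4.0, 1.0],
    [-1.0, -1.0, 5.0, 1.0],
])

f = 2.0  # hypothetical focal length
projection = np.array([
    [f, 0.0, 0.0, 0.0],
    [0.0, f, 0.0, 0.0],
    [0.0, 0.0, 1.0, 0.0],
])

projected = points @ projection.T              # one matrix multiply for the whole batch
screen = projected[:, :2] / projected[:, 2:3]  # perspective divide by depth
print(screen)
```

The same batched matrix multiplication is what neural-network layers run underneath, which is exactly why GPUs suit both workloads.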

This is perhaps the most important question in computational learning theory. In fact, some of the most important theorems of machine learning like the No Free Lunch Theorem and the Fundamental Theorem of Statistical Learning are aimed at answering this very question.

Formally, the smallest number of data points needed for successfully learning a classification rule using a machine learning (ML) algorithm is called the sample complexity of the algorithm. Now, you might wonder why sample complexity is such a big deal. It’s because sample complexity is to ML algorithms what computational complexity is to any algorithm. It measures the minimum amount of resource (i.e. the data) that is required to achieve the desired goal.

There are several interesting answers to the question of sample complexity that arise from various assumptions on the learner. In what follows, I will give the answer under some popular assumptions/scenarios.

Scenario 1: Perfect Learning

In our first scenario, we consider the problem of learning the correct hypothesis (classification rule) amongst a set of plausible hypotheses. The data is sampled independently from an unknown probability distribution.

It turns out that under no further assumptions on the data-generating probability distribution, the problem is impossible. In other words, there is no algorithm that can learn the correct classification rule perfectly from any finite amount of data. This result is called the No Free Lunch Theorem in machine learning. I’ve discussed this result in more detail here.

Scenario 2: Probably Approximately Correct (PAC) Learning

For the second scenario, we consider the problem of learning the correct hypothesis approximately, with high probability. That is, our algorithm may fail to identify even an approximately correct hypothesis with some small probability. This relaxation allows us to give a slightly more useful answer to the question.

The answer to this question is of the order of the VC-dimension of the hypothesis class. More precisely, if we want the algorithm to be approximately correct with an error of at most ε with a probability of at least 1 − δ, then we need a minimum of (d/ε)·log(1/(εδ)) data points, where d is the VC-dimension of the hypothesis class. Note that d can be infinite for certain hypothesis classes. In that case, it is not possible to succeed in the learning task even approximately, even with high probability. On the other hand, if d is finite, we say that the hypothesis class is (ε, δ)-PAC learnable. (I explain PAC-learnability in more detail in this answer.)

Scenario 3: Learning with a Teacher

In the previous two scenarios, we assume that the data that is presented to the learner is randomly sampled from an unknown probability distribution. For this scenario, we do away with the randomness. Instead, we assume that the learner is presented with a carefully chosen set of training data points that are picked by a benevolent teacher. (By benevolent teacher, I mean that the teacher tries to make the learner guess the correct hypothesis with the fewest number of data points.)

In this case, the answer to the question is the teaching dimension. It is interesting to note that there is no straightforward relation between the teaching dimension and VC-dimension of a hypothesis class. They can be arbitrarily far from each other. (If you’re curious to know the relation between the two, here is a nice paper.)


In addition to these, there are other notions of “dimension” that characterize the sample complexity of a learning task under different scenarios. For example, there is the Littlestone dimension for online learning and Natarajan dimension for multi-class learning. Intuitively, these dimensions capture the inherent hardness of a machine learning task. The harder the task, the higher the dimension and the corresponding sample complexity.


To those of you seeking exact numbers, here’s a note I added in the comments section: I wish I could add some useful empirical results, but the sample complexity bounds obtained by the PAC-learning approach are loose to the point of being useless for most state-of-the-art ML algorithms like deep learning. So, the results I presented are basically a theoretical curiosity at this point. However, this might change in the near future, as many researchers are working on strengthening this framework.

Source: Muni Sreenivas Pydi

As mentioned in the other answer, this can be understood using the concept of bias-variance tradeoff.

For any machine learning model, you want to find a function that approximately fits your data. So, you essentially define the following:

  • Class of functions : Instead of searching in the space of all possible functions, you restrict the space of functions that the algorithm searches over. For example, a linear classifier will search among all possible lines, but will not consider more complex curves.
  • Loss function : This is used to compare two functions from the above class of functions. For instance, in SVM, you would prefer line 1 to line 2 if line 1 has a larger margin than line 2.

Now, the simpler your class of functions is, the smaller the amount of data required. To get some intuition for this, think about a regression problem that has three features. So, a linear function class will have the following form:

y = a0 + a1x1 + a2x2 + a3x3

Every point (p, q, r, s) in the 4-dimensional space corresponds to a function of the above form, namely y = p + qx1 + rx2 + sx3. So, you need to find one point in that 4D space that fits your data well.

Now, if instead of the class of linear functions, you chose quadratic functions, your functions would be of the following form:

y = a0 + a1x1 + a2x2 + a3x3 + a4x1x2 + a5x2x3 + a6x1x3 + a7x1^2 + a8x2^2 + a9x3^2

So now, you have to search for the best point in a 10D space! Therefore, you need more data to distinguish among this larger number of candidate functions.

With that intuition, we can say that to learn from a small amount of data, you want to define a small enough function class.

Note: While in the above example we simply count the number of parameters to get a sense of the complexity of the function class, in general, more parameters does not necessarily mean more complexity (for instance, if many of the parameters are strongly correlated).

Source: Prasoon Goyal
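The 4D-versus-10D counting above can be checked with scikit-learn’s PolynomialFeatures (a quick sketch; the counts include the bias term a0):

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.zeros((1, 3))  # three input features: x1, x2, x3

linear = PolynomialFeatures(degree=1).fit(X)
quadratic = PolynomialFeatures(degree=2).fit(X)

print(linear.n_output_features_)     # 4  -> a0 + a1*x1 + a2*x2 + a3*x3
print(quadratic.n_output_features_)  # 10 -> adds the cross terms and squares
```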

Best Machine Learning Books That All Data Scientists Must Read.

best ml books

1. Artificial Intelligence: A Modern Approach

Artificial Intelligence: A Modern Approach
Artificial Intelligence: A Modern Approach

Experts Opinions

I like this book very much. When in doubt I look there, and usually find what I am looking for, or I find references on where to go to study the problem more in depth. I like that it tries to show how various topics are interrelated, and to give general architectures for general problems … It is a jump in quality with respect to the AI books that were previously available. — Prof. Giorgio Ingargiola (Temple).

Really excellent on the whole and it makes teaching AI a lot easier. — Prof. Ram Nevatia (USC).

It is an impressive book, which begins just the way I want to teach, with a discussion of agents, and ties all the topics together in a beautiful way. — Prof. George Bekey (USC). Buy it now

2. Deep Learning (Adaptive Computation and Machine Learning series)

Experts Opinions

“Written by three experts in the field, Deep Learning is the only comprehensive book on the subject.” —Elon Musk, cochair of OpenAI; cofounder and CEO of Tesla and SpaceX.

“If you want to know where deep learning came from, what it is good for, and where it is going, read this book.” —Geoffrey Hinton FRS, Professor, University of Toronto, Research Scientist at Google. Buy it

3. Hands-On Machine Learning with Scikit-Learn, Keras, and TensorFlow: Concepts, Tools, and Techniques to Build Intelligent Systems 2nd Edition

Experts Opinions

“An exceptional resource to study Machine Learning. You will find clear-minded, intuitive explanations, and a wealth of practical tips.” —François Chollet, Author of Keras, author of Deep Learning with Python.

“This book is a great introduction to the theory and practice of solving problems with neural networks; I recommend it to anyone interested in learning about practical ML.” — Peter Warden, Mobile Lead for TensorFlow. Buy it.

4. Python Machine Learning – Second Edition: Machine Learning and Deep Learning with Python, scikit-learn, and TensorFlow 2nd Edition

 

First things first, I don’t think there are many questions of the form “Is it a good practice to always X in machine learning” where the answer is going to be definitive. Always? Always always? Across parametric, non-parametric, Bayesian, Monte Carlo, social science, purely mathematical, and million-feature models? That’d be nice, wouldn’t it!

Concretely though, here are a few ways in which: it just depends.

Some times when normalizing is good:

1) Several algorithms, in particular SVMs come to mind, can sometimes converge far faster on normalized data (although why, precisely, I can’t recall).

2) When your model is sensitive to magnitude, and the units of two different features are different, and arbitrary. This is like the case you suggest, in which something gets more influence than it should.

But of course, not all algorithms are sensitive to magnitude in the way you suggest. Linear regression predictions will be identical whether or not you scale your data (the coefficients simply rescale to compensate), because the model captures the proportional relationships between the features and the target.

Some times when normalizing is bad:

1) When you want to interpret your coefficients, and they don’t normalize well. Regression on something like dollars gives you a meaningful outcome. Regression on proportion-of-maximum-dollars-in-sample might not.

2) When, in fact, the units on your features are meaningful, and distance does make a difference! Back to SVMs: if you’re trying to find a max-margin classifier, then the units that go into that ‘max’ matter. Scaling features for clustering algorithms can substantially change the outcome. Imagine four clusters around the origin, each one in a different quadrant, all nicely scaled. Now, imagine the y-axis being stretched to ten times the length of the x-axis. Instead of four little quadrant-clusters, you’re going to get one long squashed baguette of data chopped into four pieces along its length! (And, the important part is, you might prefer either of these!)

In I’m sure unsatisfying summary, the most general answer is that you need to ask yourself seriously what makes sense with the data, and model, you’re using. Source: ABC of Data Science and ML
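As a small illustration of the linear regression point above, here is a sketch (with made-up data) showing that scaling leaves the fitted predictions unchanged while the coefficients rescale to compensate:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2)) * [1.0, 1000.0]        # wildly different feature scales
y = X @ np.array([3.0, 0.002]) + rng.normal(size=100)

scaler = StandardScaler().fit(X)
raw = LinearRegression().fit(X, y)
scaled = LinearRegression().fit(scaler.transform(X), y)

# Predictions agree, but the coefficients themselves differ.
print(np.allclose(raw.predict(X), scaled.predict(scaler.transform(X))))
print(raw.coef_, scaled.coef_)
```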

How do you prepare data for XGBoost?

Data preparation is a critical step in the data science process, and it is especially important when working with XGBoost. XGBoost is a powerful machine learning algorithm that can provide accurate predictions on data sets of all sizes. However, in order to get the most out of XGBoost, it is important to prepare the data in a way that is conducive to machine learning. This means ensuring that the data is clean, feature engineering has been performed, and that the data is in a format that can be easily consumed by the algorithm. By taking the time to prepare the data properly, data scientists can significantly improve the performance of their machine learning models.

When preparing the dataset for your machine learning model, you should use one-hot encoding on what type of data?

In machine learning and data science, one-hot encoding is a process by which categorical data is converted into a format that is suitable for use with machine learning algorithms. Each distinct category gets its own binary column, and exactly one of those columns is “hot” (set to 1) for any given row. For example, if there are three groups, the first group would be encoded as [1, 0, 0], the second as [0, 1, 0], and the third as [0, 0, 1]. One-hot encoding is often used when working with categorical data, as it can help to improve the performance of machine learning models. In addition, one-hot encoding can also make it easier to visualize the relationship between different categories.

In machine learning and data science, one-hot encoding is a method used to convert categorical features into numerical features. This is often necessary when working with machine learning models, as many models can only accept numerical input. However, one-hot encoding is not without its problems. The most significant issue is the potential for increased dimensionality – if a dataset has too many features, it can be difficult for the model to learn from the data. In addition, one-hot encoding can create sparse datasets, which can also be difficult for some machine learning models to handle. Despite these issues, one-hot encoding remains a popular method for preparing data for machine learning models.
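A minimal pandas sketch of one-hot encoding (the column name and values are hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})
encoded = pd.get_dummies(df, columns=["color"])  # one binary column per category
print(encoded.columns.tolist())   # ['color_blue', 'color_green', 'color_red']
print(encoded.to_numpy().astype(int))
```

Note how one column became three; with high-cardinality categories this dimensionality blow-up is exactly the drawback mentioned above.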

A retail company wants to start personalizing product recommendations to visitors of their website. They have historical data of what products the users have purchased and want to implement the system for new users, prior to them purchasing a product. What’s one way of phrasing a machine learning problem for this situation?

For this retail company, a machine learning problem could be phrased as a prediction problem. The goal would be to build a model that can take in data about a new user (such as demographic information and web browsing history) and predict which products they are likely to purchase. This would allow the company to give each new user personalized product recommendations, increasing the chances of making a sale. Data science techniques such as feature engineering and model selection would be used to build the best possible prediction model. By phrasing the machine learning problem in this way, the retail company can make the most of their historical data and improve the user experience on their website.

There are many ways to frame a machine learning problem for a retail company that wants to start personalizing product recommendations to visitors of their website. One way is to focus on prediction: using historical data of what products users have purchased, can we predict which products new users will be interested in? This is a task that machine learning is well suited for, and with enough data, we can build a model that accurately predicts product interests for new users. Another way to frame the problem is in terms of classification: given data on past purchases, can we classify new users into groups based on their product interests? This would allow the retail company to more effectively target personalization efforts. There are many other ways to frame the machine learning problem, depending on the specific goals of the company. But no matter how it’s framed, machine learning can be a powerful tool for personalizing product recommendations.

A data scientist is trying to determine how a model is doing based on training evaluation. The train accuracy plateaus out at around 70% and the validation accuracy is 67%. How should the data scientist interpret these results?

When working with machine learning models, it is important to evaluate how well the model is performing. This can be done by looking at the train and validation accuracy. In this case, the train accuracy has plateaued at around 70% and the validation accuracy is 67%. There are a few possible explanations for this. One possibility is that the model is overfitting on the training data. This means that the model is able to accurately predict labels for the training data, but it is not as effective at generalizing to new data. Another possibility is that there is a difference in the distribution of the training and validation data. If the validation data is different from the training data, then it makes sense that the model would have a lower accuracy on the validation data. To determine which of these explanations is most likely, the data scientist should look at the confusion matrix and compare the results of the training and validation sets. If there are large differences between the two sets, then it is likely that either overfitting or a difference in distributions is to blame. However, if there isn’t a large difference between the sets, then it’s possible that 70% is simply the best accuracy that can be achieved given the data.

One important consideration in machine learning is how well a model is performing. This can be determined in a number of ways, but one common method is to split the data into a training set and a validation set. The model is then trained on the training data and evaluated on the validation data. If the model is performing well, we would expect to see a similar accuracy on both the training and validation sets. However, in this case the training accuracy plateaus out at around 70% while the validation accuracy is only 67%. This could be indicative of overfitting, where the model has fit the training data too closely and does not generalize well to new data. In this case, the data scientist should look for ways to improve the model so that it performs better on the validation set.

When updating your weights using the loss function, what dictates how much change the weights should have?

In machine learning and data science, the learning rate is a parameter that dictates how much change the weights should have when updating them using the loss function. The learning rate is typically a small value between 0 and 1. A higher learning rate means that the weights are updated more quickly, which can lead to faster convergence but can also lead to instability. A lower learning rate means that the weights are updated more slowly, which can lead to slower convergence but can also help avoid overfitting. The optimal learning rate for a given problem can be found through trial and error. The bias term is another parameter that can affect the weight updates. The bias term is used to prevent overfitting by penalizing models that make too many assumptions about the data. The initial weights are also important, as they determine where the model starts on the optimization landscape. The batch size is another important parameter, as it defines how many training examples are used in each iteration of weight updates. A larger batch size can lead to faster convergence, but a smaller batch size can help avoid overfitting. Finding the optimal values for all of these parameters can be a challenge, but doing so is essential for training high-quality machine learning models.
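The role of the learning rate can be sketched with one gradient-descent step on a toy linear model (the data values are made up; starting from zero weights, the update scales linearly with the learning rate):

```python
import numpy as np

# One gradient-descent step on mean-squared-error loss for a
# toy linear model y ≈ X @ w.
def sgd_step(w, X, y, learning_rate):
    gradient = 2 * X.T @ (X @ w - y) / len(y)  # dLoss/dw
    return w - learning_rate * gradient        # larger learning rate => larger change

X = np.array([[1.0, 2.0], [3.0, 4.0]])
y = np.array([5.0, 6.0])
w0 = np.zeros(2)

small = sgd_step(w0, X, y, learning_rate=0.01)
large = sgd_step(w0, X, y, learning_rate=0.1)
print(small, large)  # same direction, ten times the step size
```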

An ad tech company is using an XGBoost model to classify its clickstream data. The company’s Data Scientist is asked to explain how the model works to a group of non-technical colleagues. What is a simple explanation the Data Scientist can provide?

Machine learning is a form of artificial intelligence that allows computers to learn from data without being explicitly programmed. It is a powerful tool for solving complex problems, and XGBoost is a popular machine learning algorithm. Algorithms like XGBoost work by building a model based on training data, and then using that model to make predictions on new data. In the case of the ad tech company, the Data Scientist has used XGBoost to build a model that can classify clickstream data. This means that the model can look at new data and predict which category it belongs to. For example, the model might be able to predict whether a user is likely to click on an ad or not. The Data Scientist can explain how the model works by showing how it makes predictions on new data.

Machine learning is a method of teaching computers to learn from data without being explicitly programmed; it is a subset of artificial intelligence (AI). The XGBoost algorithm is a machine learning technique used to create models that predict outcomes by learning from past data. XGBoost is an implementation of gradient boosting, a technique for creating models that make predictions by combining the predictions of multiple individual models. The XGBoost algorithm is highly effective and is used by many organizations, including ad tech companies, to classify their data. The Data Scientist can explain how the XGBoost model works by first giving this simple explanation of machine learning, and then describing gradient boosting in the same plain terms.

 

An ML Engineer at a real estate startup wants to use a new quantitative feature for an existing ML model that predicts housing prices. Before adding the feature to the cleaned dataset, the Engineer wants to visualize the feature in order to check for outliers and overall distribution and skewness of the feature. What visualization technique should the ML Engineer use? 

The machine learning engineer at the real estate startup should use a visualization technique in order to check for outliers and overall distribution and skewness of the new quantitative feature. There are many different visualization techniques that could be used for this purpose, but two of the most effective are histograms and scatterplots. A histogram can show the distribution of values for the new feature, while a scatterplot can help to identify any outliers. By visualizing the data, the engineer will be able to ensure that the new feature is of high quality and will not impact the performance of the machine learning model.
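A hedged matplotlib sketch of both checks (the feature values and output filename are made up for illustration):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, so this runs in scripts
import matplotlib.pyplot as plt

# Hypothetical new feature (e.g. lot size) with two injected outliers.
rng = np.random.default_rng(42)
lot_size = np.concatenate([rng.lognormal(8.0, 0.3, 1000), [1e6, 2e6]])

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
ax1.hist(lot_size, bins=50)        # overall distribution and skewness
ax1.set_title("Histogram")
ax2.boxplot(lot_size, vert=False)  # box plot flags outliers explicitly
ax2.set_title("Box plot")
fig.savefig("feature_check.png")
```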

When updating your weights using the loss function, what dictates how much change the weights should have?

The loss function is a key component of machine learning algorithms, as it determines how well the model is performing. When updating the weights using the loss function, the learning rate dictates how much change the weights should have. The learning rate is a hyperparameter that can be tuned to find the optimal value for the model. The bias term is another important factor that can influence the weights. The initial weights can also play a role in how much change the weights should have. The batch size is another important factor to consider when updating the weights using the loss function.

A data scientist wants to clean and merge two small datasets stored in CSV format. What tool can they use to merge these datasets together?

As a data scientist, you often need to work with multiple datasets in order to glean insights that would be hidden in any one dataset on its own. In order to do this, you need to be able to clean and merge datasets quickly and efficiently. One tool that can help you with this task is Pandas. Pandas is a Python library that is specifically designed for data analysis. It offers a wide range of features that make it well-suited for merging datasets, including the ability to read in CSV format, clean data, and merge datasets with ease. In addition, Pandas integrates well with other machine learning libraries such as Scikit-learn, making it a valuable tool for data scientists.

As a data scientist, one of the most important skills is knowing how to clean and merge datasets. This can be a tedious and time-consuming process, but it is essential for machine learning and data science projects. There are several tools that data scientists can use to merge datasets, but one of the most popular options is pandas. Pandas is a Python library that offers a wide range of functions for data manipulation and analysis. Additionally, pandas has built-in support for reading and writing CSV files. This makes it an ideal tool for merging small datasets stored in CSV format. With pandas, data scientists can quickly and easily clean and merge their data, giving them more time to focus on other aspects of their projects.
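A minimal pandas sketch (the CSVs are inlined via StringIO so the example is self-contained; the column names are hypothetical):

```python
from io import StringIO

import pandas as pd

# Two small hypothetical CSV files, stood in for by inline strings.
houses = pd.read_csv(StringIO("house_id,city\n1,Austin\n2,Denver\n3,Boston"))
prices = pd.read_csv(StringIO("house_id,price\n1,450000\n2,520000"))

# Clean, then merge on the shared key; how="left" keeps unmatched houses.
merged = houses.drop_duplicates().merge(prices, on="house_id", how="left")
print(merged)
```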

A real estate company is building a linear regression model to predict housing prices for different cities in the US. Which of the following is NOT a good metric to measure performance of their regression model?

Machine learning is a subset of data science that deals with the design and development of algorithms that can learn from and make predictions on data. Linear regression is a machine learning algorithm used to predict numerical values based on a linear relationship between input variables. When building a linear regression model, it is important to choose an appropriate metric to measure the performance of the model. The R-squared value, mean squared error, and mean absolute error are all valid metrics for measuring the performance of a linear regression model. However, the F1 score is not a good metric to use for this purpose: it is a classification metric, computed from precision and recall over discrete class labels, and is not defined for continuous predictions such as housing prices. As such, using the F1 score to evaluate a linear regression model would be meaningless.

A real estate company wants to provide its customers with a more accurate prediction of the final sale price for houses they are considering in various cities. To do this, the company wants to use a fully connected neural network trained on data from the previous ten years of home sales, as well as other features. What kind of machine learning problem does this situation most likely represent?

Answer: Regression

Which feature of Amazon SageMaker can you use for preprocessing the data?

 
 

Answer: Amazon SageMaker notebook instances

Amazon SageMaker enables developers and data scientists to build, train, tune, and deploy machine learning (ML) models at scale. You can deploy trained ML models for real-time or batch predictions on unseen data, a process known as inference. However, in most cases, the raw input data must be preprocessed and can’t be used directly for making predictions. This is because most ML models expect the data in a predefined format, so the raw data needs to be first cleaned and formatted in order for the ML model to process the data.  You can use the Amazon SageMaker built-in Scikit-learn library for preprocessing input data and then use the Amazon SageMaker built-in Linear Learner algorithm for predictions.

What setting, when creating an Amazon SageMaker notebook instance, can you use to install libraries and import data?

Answer: LifeCycle Configuration

You work for the largest coffee chain in the world. You’ve recently decided to source beans from a new market to create new blends and flavors. These beans come from 30 different growers, in 3 different countries. In order to keep a consistent flavor, you have each grower send samples of their beans to your tasting baristas who rate the beans on 20 different dimensions. You now need to group the beans together so the supply can be diversified yet the flavor of the final product kept as consistent as possible.
What is one way you could convert this business situation into a machine learning problem?

 
 

Answer: Phrase it as an unsupervised clustering problem: group the growers into clusters based on their 20-dimensional flavor ratings (for example with k-means), so that beans within a cluster can be blended interchangeably while keeping the final flavor consistent.
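One way to sketch that framing with scikit-learn’s KMeans (the ratings below are random placeholders, and 5 clusters is an arbitrary choice):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical tasting data: 30 growers rated on 20 flavor dimensions.
rng = np.random.default_rng(0)
ratings = rng.normal(size=(30, 20))

# Group the growers into 5 flavor clusters; blends can then mix
# growers within a cluster without shifting the taste profile.
kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(ratings)
print(kmeans.labels_)  # one cluster id per grower
```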

In which phase of the ML pipeline does the machine learn from the data?

 
 
 

Answer: Model Training

A text analytics company is developing a text classification model to detect whether a document involves offensive content or not. The training dataset included ten non-offensive documents for every one offensive document. Their model resulted in an accuracy score of 94%.
What can we conclude from this result?

Answer: Accuracy is the wrong metric here, because it can be heavily influenced by the large class (non-offensive documents).
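A quick sanity check of why: with a 10:1 class ratio, a trivial model that always predicts "non-offensive" already approaches the reported 94%:

```python
# Hypothetical counts matching the 10:1 ratio in the training data.
offensive, non_offensive = 100, 1000

# A "model" that always predicts non-offensive is right on every
# non-offensive document and wrong on every offensive one.
baseline_accuracy = non_offensive / (offensive + non_offensive)
print(round(baseline_accuracy, 3))  # 0.909, while catching zero offensive documents
```

Metrics like precision, recall, or F1 on the offensive class expose this failure, where raw accuracy hides it.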

A Machine Learning Engineer is creating a regression model for forecasting company revenue based on an internal dataset made up of past sales and other related data.

 

What metric should the Engineer use to evaluate the ML model?

 
Answer: Root Mean Squared Error (RMSE)
Root Mean Square Error (RMSE) is the standard deviation of the residuals (prediction errors). Residuals are a measure of how far from the regression line data points are; RMSE is a measure of how spread out these residuals are. In other words, it tells you how concentrated the data is around the line of best fit.
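A minimal sketch of computing RMSE by hand (toy numbers):

```python
import numpy as np

def rmse(y_true, y_pred):
    residuals = np.asarray(y_true) - np.asarray(y_pred)
    return float(np.sqrt(np.mean(residuals ** 2)))

actual = [100.0, 200.0, 300.0]
forecast = [110.0, 190.0, 310.0]
print(rmse(actual, forecast))  # 10.0: every forecast is off by 10
```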

An ML scientist has built a tree-ensemble model (1,000 decision trees) using scikit-learn. The training accuracy for the model was 99.2% and the test accuracy was 70.3%. Should the scientist use this model in production?

 
Answer:  No, because it is not generalizing well on the test set
 
 
 

The curse of dimensionality relates to which of the following?

 
Answer: A high number of features in a dataset

 

The curse of dimensionality relates to a high number of features in a dataset.

The curse of dimensionality describes the explosive nature of increasing data dimensions and the resulting exponential increase in computational effort required for processing and/or analysis. The term was first introduced by Richard E. Bellman.

A Data Scientist wants to include “month” as a categorical column in a training dataset for an ML model that is being built. However, the ML algorithm gives an error when the column is added to the training data. What should the Data Scientist do to add this column?

 

Answer: One-hot encode the “month” column so that each month becomes its own binary feature. Most ML algorithms require numeric input and will raise an error on raw categorical values.

StandardScaler standardizes a feature by subtracting the mean and then scaling to unit variance. Unit variance means dividing all the values by the standard deviation. StandardScaler does not meet the strict definition of scale I introduced earlier.
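That transform can be sketched in two lines of NumPy (toy values):

```python
import numpy as np

x = np.array([10.0, 20.0, 30.0])
standardized = (x - x.mean()) / x.std()  # subtract the mean, divide by the standard deviation
print(standardized)                      # result has mean 0 and unit variance
```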

What is the primary reason that one might want to pick either random search or Bayesian optimization over grid search when performing hyperparameter optimization?

 
 
Answer: Random search and Bayesian methods leave smaller unexplored regions than grid searches
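A hedged scikit-learn sketch contrasting the two (synthetic data; the estimator and parameter ranges are arbitrary choices):

```python
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV

X, y = make_classification(n_samples=200, random_state=0)

# Grid search only ever visits the three listed values of C.
grid = GridSearchCV(LogisticRegression(max_iter=1000),
                    {"C": [0.01, 1.0, 100.0]}, cv=3).fit(X, y)

# Random search draws ten C values from a continuous range, so the
# axis is sampled more densely and fewer regions go unexplored.
rand = RandomizedSearchCV(LogisticRegression(max_iter=1000),
                          {"C": loguniform(0.01, 100.0)}, n_iter=10,
                          cv=3, random_state=0).fit(X, y)

print(grid.best_params_)
print(rand.best_params_)
```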
 
 
 

A Data Scientist trained an XGBoost model to classify internal documents for further inquiry, and now wants to evaluate the model’s performance by looking at the results visually. What technique should the Data Scientist use in this situation?

Answer: Build a confusion matrix from the model’s predictions and visualize it (e.g., as a heatmap).
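A confusion matrix is the usual way to inspect classification results visually. A minimal sketch (in practice, sklearn.metrics.confusion_matrix plus a heatmap does this):

```python
def confusion_matrix(y_true, y_pred):
    """Rows are actual labels, columns are predicted labels."""
    labels = sorted(set(y_true) | set(y_pred))
    index = {label: i for i, label in enumerate(labels)}
    matrix = [[0] * len(labels) for _ in labels]
    for actual, predicted in zip(y_true, y_pred):
        matrix[index[actual]][index[predicted]] += 1
    return labels, matrix

labels, m = confusion_matrix(["spam", "ham", "spam", "ham"],
                             ["spam", "ham", "ham", "ham"])
```

Off-diagonal cells show exactly which classes the model confuses, which is what a visual evaluation is after.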

Machine Learning Pipeline Goals


Common Machine Learning Use Cases


What is a machine learning model?


What are features and weights meaning in Machine Learning?

Features meaning in ML

 

Weights in ML

 

Machine Learning Features and Weights

Reference: Wiki

PRE-PROCESSING AND FEATURE ENGINEERING

Visualization


Outliers

Missing Value

Feature Engineering

 

Source: https://github.com/mortezakiadi/ML-Pipeline/wiki

How to Choose the Right SageMaker Built-in Algorithm?

Guide to choosing the right unsupervised learning algorithm

 

Choosing the right ML algorithm based on data type

This is a general guide for choosing which algorithm to use depending on what business problem you have and what data you have. 

Machine Learning Deployment and Monitoring

Deployment and Monitoring

Machine Learning Model Performance Evaluation Metrics







How do you export data from BigQuery to a CSV file?

Google Cloud’s BigQuery is a powerful tool for storing and querying large data sets. However, sometimes you may need to export data from BigQuery in order to perform additional analysis or simply to have a backup. Thankfully, Google Cloud makes it easy to export data from BigQuery to a CSV file.

  • The first step is to select the dataset that you want to export.
  • Next, click on the “Export Table” button. In the pop-up window, select “CSV” as the file format and choose a location to save the file.
  • Finally, click on the “Export” button and Google Cloud will begin exporting the data.
  • Depending on the size of the data set, this may take several minutes. Once the export is complete, you will have a CSV file containing all of the data from BigQuery.

Alternatively, simply run the following command:

bq extract --destination_format=CSV [dataset_name].[table_name] gs://[bucket_name]/[file_name].csv

This will export your data to a CSV file in Google Cloud Storage, from which you can download it and use it in another program. (The companion “bq load” command works in the opposite direction: it loads data from Cloud Storage into a BigQuery table.)


What is the Difference Between Mini-Batch and Full-Batch in Machine Learning?

In the field of machine learning, there are two types of batch sizes that are commonly used: mini-batch and full-batch. Both have their pros and cons, and the choice of which to use depends on the situation. Here’s a quick rundown of the differences between mini-batch and full-batch in machine learning.

Mini-Batch Machine Learning
Mini-batch machine learning is a type of batch processing where the data is divided into small batches before being fed into the learning algorithm. Because the model is updated after every batch, training makes steady progress without the whole dataset having to fit in memory at once. The trade-off is that each update is based on a noisy estimate of the true gradient, so individual steps are less accurate.

Full-Batch Machine Learning
Full-batch machine learning is a type of batch processing where the entire dataset is fed into the learning algorithm at once. Each update uses the exact gradient over all the data, so the steps are stable and accurate. The drawback is that every single update requires a full pass over the dataset, which is slow and memory-hungry for large datasets.

So, which should you use? For small datasets that fit comfortably in memory, full-batch training is simple and stable. For large datasets, or when training speed matters, mini-batch training is usually the better choice.

The Difference Between Mini-Batch and Full-Batch Learning

In machine learning, there are two main types of batch learning: mini-batch and full-batch. Both types of batch learning algorithms have their own pros and cons that data scientists should be aware of. In this blog post, we’ll take a look at the difference between mini-batch and full-batch learning so you can make an informed decision about which type of algorithm is right for your project.

Mini-batch learning is a type of batch learning that operates on small subsets of the training data, typically referred to as mini-batches. The advantage of mini-batch learning is that it can be parallelized across multiple processors or devices, which makes training much faster than full-batch training. Another advantage is that mini-batches can be generated on the fly from a larger dataset, which is especially helpful if the entire dataset doesn’t fit into memory. However, one downside of mini-batch learning is that it can sometimes lead to suboptimal results due to its stochastic nature.

Full-Batch Learning
Full-batch learning is a type of batch learning that operates on the entire training dataset at once. Because it uses the exact gradient, it takes fewer, more stable steps, and on convex problems it converges reliably toward the optimum. The disadvantage is that every update requires a complete pass over the data, so it is slow and doesn’t scale well to large datasets.

So, which type of batch learning algorithm is right for your project? If you’re working with a small dataset, then full-batch learning might be your best bet. However, if you’re working with a large dataset or need to train your model quickly, then mini-batch or SGD might be better suited for your needs. As always, it’s important to experiment with different algorithms and tuning parameters to see what works best for your particular problem.
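The trade-off is easy to demonstrate. A pure-Python sketch that trains the same toy linear model both ways (the data and hyperparameters here are invented for illustration):

```python
import random

def gradient(w, batch):
    """d/dw of the mean squared error of the model y = w * x over one batch."""
    return sum(2 * (w * x - y) * x for x, y in batch) / len(batch)

def train(data, batch_size, epochs=100, lr=0.01, seed=0):
    """batch_size == len(data) gives full-batch training (one exact update per
    epoch); a smaller batch_size gives mini-batch training (several noisier
    updates per epoch)."""
    rng = random.Random(seed)
    w = 0.0
    for _ in range(epochs):
        rng.shuffle(data)
        for i in range(0, len(data), batch_size):
            w -= lr * gradient(w, data[i:i + batch_size])
    return w

# Invented toy data lying exactly on y = 3x, so both styles should recover w = 3.
data = [(float(x), 3.0 * x) for x in range(1, 11)]
w_full = train(list(data), batch_size=len(data))
w_mini = train(list(data), batch_size=2)
```

Both converge here; the difference shows up at scale, where the full-batch pass per update becomes the bottleneck while mini-batches keep making progress.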

2023 AWS Certified Machine Learning Specialty (MLS-C01) Practice Exams

Welcome to the AWS Certified Machine Learning Specialty (MLS-C01) Practice Exams!

This book is designed to help you prepare for the AWS Certified Machine Learning – Specialty (MLS-C01) exam and earn your AWS certification. The AWS Certified Machine Learning – Specialty (MLS-C01) exam is designed for individuals who have a strong understanding of machine learning concepts and techniques, and who can design, build, and deploy machine learning models on the AWS platform.

In this book, you will find a series of practice exams that are designed to mimic the format and content of the actual MLS-C01 exam. Each practice exam includes a set of multiple choice and multiple response questions that cover a range of topics, including machine learning concepts, techniques, and algorithms, as well as the AWS services and tools used to build and deploy machine learning models.

By working through these practice exams, you can test your knowledge, identify areas where you need further study, and gain confidence in your ability to pass the MLS-C01 exam. Whether you are a machine learning professional looking to earn your AWS certification or a student preparing for a career in machine learning, this book is an essential resource for your exam preparation.


What is the best Japanese natural language processing (NLP) library?

NLP is a field of computer science and artificial intelligence that deals with the interactions between computers and human languages. NLP algorithms are used to process and analyze large amounts of natural language data. Japanese NLP libraries are used to develop applications that can understand and respond to Japanese text.

The best Japanese NLP library depends on your application’s needs.

For example, if you are developing a machine translation application, you will need a library that supports word sense disambiguation and part-of-speech tagging. If you are developing a chatbot, you will need a library that supports sentence analysis and dialogue management. In general, Japanese NLP libraries can be divided into three categories: rule-based systems, statistical systems, and hybrid systems.

Rule-based systems rely on linguistic rules to process language data.

Statistical systems use statistical models to process language data.

Hybrid systems use a combination of linguistic rules and statistical models to process language data.

The best Japanese NLP library for your application will depend on the type of NLP tasks you need to perform and your resources (e.g., time, data, computing power).

XGBoost is a powerful tool that has a wide range of applications in the real world. XGBoost is a gradient boosting algorithm: it builds an ensemble of decision trees in which each new tree is trained to correct the errors of the trees that came before it.

XGBoost has been used to improve the performance of data science models in a variety of fields, including healthcare, finance, and retail.

In healthcare, XGBoost has been used to predict patient outcomes, such as length of stay in the hospital and mortality rates.

In finance, XGBoost has been used to predict stock prices and credit card fraud.

In retail, XGBoost has been used to improve customer segmentation and product recommendations.

XGBoost is a versatile tool that can be used to improve the performance of machine learning models in many different fields.
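To make the boosting idea concrete, here is a minimal gradient-boosting sketch that fits depth-1 “stumps” to residuals on 1-D data. It illustrates the principle behind XGBoost; it is not the xgboost library itself, and the dataset is invented:

```python
def fit_stump(x, residuals):
    """Return the depth-1 tree (one threshold, two leaf values) that
    minimizes squared error on the residuals."""
    best = None
    for t in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= t]
        right = [r for xi, r in zip(x, residuals) if xi > t]
        if not left or not right:
            continue
        lmean, rmean = sum(left) / len(left), sum(right) / len(right)
        err = (sum((r - lmean) ** 2 for r in left)
               + sum((r - rmean) ** 2 for r in right))
        if best is None or err < best[0]:
            best = (err, t, lmean, rmean)
    _, t, lmean, rmean = best
    return lambda xi, t=t, lmean=lmean, rmean=rmean: lmean if xi <= t else rmean

def boost(x, y, rounds=50, lr=0.1):
    """Each round fits a stump to the current residuals, so every new tree
    corrects the errors of the ensemble built so far."""
    pred = [0.0] * len(y)
    stumps = []
    for _ in range(rounds):
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stump = fit_stump(x, residuals)
        stumps.append(stump)
        pred = [pi + lr * stump(xi) for pi, xi in zip(pred, x)]
    return lambda xi: sum(lr * s(xi) for s in stumps)

# Invented toy data with an obvious jump between x <= 3 and x >= 4.
x = [1, 2, 3, 4, 5, 6]
y = [1.0, 1.2, 0.9, 7.0, 7.3, 6.8]
model = boost(x, y)
```

The real library adds regularization, second-order gradients, and highly optimized tree construction on top of this same residual-fitting loop.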

Machine Learning For Dummies  on iOs:  https://apps.apple.com/us/app/machinelearning-for-dummies-p/id1610947211

Machine Learning For Dummies on Windows: https://www.microsoft.com/en-ca/p/machinelearning-for-dummies-ml-ai-ops-on-aws-azure-gcp/9p6f030tb0mt?

Machine Learning For Dummies Web/Android on Amazon: https://www.amazon.com/gp/product/B09TZ4H8V6

#MachineLearning #AI #ArtificialIntelligence #ML #MachineLearningForDummies #MLOPS #NLP #ComputerVision #AWSMachineLEarning #AzureAI #GCPML

Can AI predict the tournament winner and the golden boot for the FIFA World Cup Football Soccer 2022 in Qatar?

What are some ways we can use machine learning and artificial intelligence for algorithmic trading in the stock market?

How do we know that the top 3 voice recognition devices like Siri, Alexa, and OK Google are not spying on us?

What are some good datasets for Data Science and Machine Learning?

What are popular hobbies among Software Engineers?

O(n) Contiguous Subarray in Python


You are given an array arr of N integers. For each index i, you are required to determine the number of contiguous subarrays that fulfill the following conditions:

  • The value at index i must be the maximum element in the contiguous subarrays, and
  • These contiguous subarrays must either start from or end on index i.

Signature

int[] countSubarrays(int[] arr)

Input

  • Array arr is a non-empty list of unique integers that range from 1 to 1,000,000,000
  • Size N is between 1 and 1,000,000

Output

An array where each index i contains an integer denoting the maximum number of contiguous subarrays of arr[i].

Example: arr = [3, 4, 1, 6, 2], output = [1, 3, 1, 5, 1]

Explanation:

  • For index 0 – [3] is the only contiguous subarray that starts (or ends) with 3, and the maximum value in this subarray is 3.
  • For index 1 – [4], [3, 4], [4, 1]
  • For index 2 – [1]
  • For index 3 – [6], [6, 2], [1, 6], [4, 1, 6], [3, 4, 1, 6]
  • For index 4 – [2]

So, the answer for the above input is [1, 3, 1, 5, 1]


Solution in Python O(n)
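A sketch of an O(n) solution using monotonic stacks: for each index, count the subarrays ending at i in which arr[i] is the maximum (the distance to the previous strictly greater element) and the subarrays starting at i (the distance to the next strictly greater element), then combine the two counts, subtracting 1 so the single-element subarray is not counted twice:

```python
def count_subarrays(arr):
    n = len(arr)
    left = [0] * n   # subarrays ending at i in which arr[i] is the maximum
    right = [0] * n  # subarrays starting at i in which arr[i] is the maximum
    stack = []
    for i in range(n):
        while stack and arr[stack[-1]] < arr[i]:
            stack.pop()  # discard indices whose values arr[i] dominates
        left[i] = i - (stack[-1] if stack else -1)
        stack.append(i)
    stack = []
    for i in range(n - 1, -1, -1):
        while stack and arr[stack[-1]] < arr[i]:
            stack.pop()
        right[i] = (stack[-1] if stack else n) - i
        stack.append(i)
    # The single-element subarray [arr[i]] is counted in both directions.
    return [left[i] + right[i] - 1 for i in range(n)]
```

Each index is pushed and popped at most once per pass, so the whole computation is O(n).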


O(n) Rotational Cipher in Python


Rotational Cipher: One simple way to encrypt a string is to “rotate” every alphanumeric character by a certain amount. Rotating a character means replacing it with another character that is a certain number of steps away in normal alphabetic or numerical order. For example, if the string “Zebra-493?” is rotated 3 places, the resulting string is “Cheud-726?”. Every alphabetic character is replaced with the character 3 letters higher (wrapping around from Z to A), and every numeric character is replaced with the character 3 digits higher (wrapping around from 9 to 0). Note that non-alphanumeric characters remain unchanged. Given a string and a rotation factor, return the encrypted string.

Signature

string rotationalCipher(string input, int rotationFactor)

Input

1 <= |input| <= 1,000,000
0 <= rotationFactor <= 1,000,000

Output

Return the result of rotating every alphanumeric character of input by rotationFactor positions.


Example 1

input = Zebra-493?

rotationFactor = 3

output = Cheud-726?



Example 2

input = abcdefghijklmNOPQRSTUVWXYZ0123456789

rotationFactor = 39

output = nopqrstuvwxyzABCDEFGHIJKLM9012345678


O(n) Solution in Python:
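A straightforward O(n) implementation (a sketch; identifier names are illustrative) rotates digits modulo 10 and letters modulo 26, leaving every other character untouched:

```python
def rotational_cipher(text, rotation_factor):
    result = []
    for ch in text:
        if ch.isdigit():
            # Rotate within '0'..'9', wrapping from 9 back to 0
            result.append(chr((ord(ch) - ord('0') + rotation_factor) % 10 + ord('0')))
        elif ch.islower():
            # Rotate within 'a'..'z', wrapping from z back to a
            result.append(chr((ord(ch) - ord('a') + rotation_factor) % 26 + ord('a')))
        elif ch.isupper():
            # Rotate within 'A'..'Z', wrapping from Z back to A
            result.append(chr((ord(ch) - ord('A') + rotation_factor) % 26 + ord('A')))
        else:
            # Non-alphanumeric characters pass through unchanged
            result.append(ch)
    return ''.join(result)

print(rotational_cipher("Zebra-493?", 3))  # → Cheud-726?
```

Because the rotation is taken modulo the alphabet size, a rotationFactor of 39 behaves like 13 for letters and 9 for digits, which matches Example 2.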


Test 1:

PS C:\dev\scripts> .\test_rotational_cipher_python.py
Length Dic Numbers : 10
Length Dic lowercase : 26
Length Dic uppercase : 26
Input: Zebra-493?
Output: Cheud-726?

Test 2:

PS C:\dev\scripts> .\test_rotational_cipher_python.py
Length Dic Numbers : 10
Length Dic lowercase : 26
Length Dic uppercase : 26
Input: abcdefghijklmNOPQRSTUVWXYZ0123456789
Output: nopqrstuvwxyzABCDEFGHIJKLM9012345678

What is the programming model and best language for Hadoop and Spark? Python or Java?



Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware. It provides massive storage for any kind of data, enormous processing power, and the ability to handle virtually limitless concurrent tasks or jobs. Apache Hadoop is used mainly for data analysis.

Apache Spark is an open-source, distributed, general-purpose cluster-computing framework. Spark provides an interface for programming entire clusters with implicit data parallelism and fault tolerance.

The question is: which programming language is best for driving Hadoop and Spark?

The programming model for developing Hadoop-based applications is MapReduce. In other words, MapReduce is the processing layer of Hadoop.
The MapReduce programming model is designed for processing large volumes of data in parallel by dividing the work into a set of independent tasks. Hadoop MapReduce is a software framework for easily writing applications that process the vast amounts of structured and unstructured data stored in the Hadoop Distributed File System (HDFS). The biggest advantage of MapReduce is that it makes data processing across multiple computing nodes easy. Under the MapReduce model, the data-processing primitives are called Mappers and Reducers.
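The Mapper/Reducer split described above can be sketched without any framework at all. The toy word count below (purely illustrative, not Hadoop code; the function name is my own) shows the map, shuffle, and reduce phases in plain Python:

```python
from collections import defaultdict

def mapreduce_word_count(documents):
    # Map phase: each mapper emits (word, 1) pairs from its input record
    mapped = []
    for doc in documents:
        for word in doc.split():
            mapped.append((word, 1))

    # Shuffle phase: group all emitted values by key
    grouped = defaultdict(list)
    for key, value in mapped:
        grouped[key].append(value)

    # Reduce phase: each reducer aggregates the values for one key
    return {key: sum(values) for key, values in grouped.items()}

print(mapreduce_word_count(["big data", "big compute"]))
```

In a real Hadoop job, the map and reduce phases run as independent tasks on different nodes and the framework performs the shuffle; the division of labor, however, is the same.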

Spark is written in Scala and Hadoop is written in Java.


The key difference between Hadoop MapReduce and Spark lies in the approach to processing: Spark can do it in-memory, while Hadoop MapReduce has to read from and write to a disk. As a result, the speed of processing differs significantly – Spark may be up to 100 times faster.

In-memory processing is faster because no time is spent moving data and intermediate results in and out of disk, which is what gives Spark its speed advantage over MapReduce.

Spark’s hardware is more expensive than Hadoop MapReduce’s, because its workloads need a lot of RAM.



Hadoop runs on Linux, which means you also need some knowledge of Linux.

Java is important for Hadoop because:

  • There are some advanced features that are only available via the Java API.
  • It gives you the ability to go deep into the Hadoop code and figure out what’s going wrong.

In both these situations, Java becomes very important.
As a developer, you can enjoy many advanced features of Spark and Hadoop if you start with their native languages (Java and Scala).


What Does Python Offer for Hadoop and Spark?

  • Simple syntax – Python’s syntax is simpler, which makes it more user friendly than the other two languages.
  • Easy to learn – Python reads almost like English, so it is much easier to learn and master.
  • Large community support – Unlike Scala, Python has a huge, active community that can help you resolve your queries.
  • Libraries, frameworks, and packages – Python has a huge number of scientific packages, libraries, and frameworks that help you work in any Hadoop or Spark environment.
  • Python compatibility with Hadoop – A package called Pydoop offers access to the HDFS API for Hadoop, which lets you write Hadoop MapReduce programs and applications.

  • Hadoop is built on Java (as are other big-data technologies such as the Elasticsearch engine, even though it serves JSON REST requests).
  • Spark is written in Scala, although PySpark (the Python API for Spark) has gained a lot of momentum lately.


If you are planning to work as a Hadoop data analyst, Python is preferable: it has many libraries for advanced analytics, and you can also use Spark for advanced analytics and machine learning through the PySpark API.


A key-value pair is the record entity that a MapReduce job receives for execution. Before data is passed to the mapper, it must first be converted into key-value pairs, because the mapper only understands data in that form.

Resources:

1. Quora
2. Wikipedia
3. Data Flair
