What are some coding anti-patterns that can easily slip through code reviews?
Programmers are a notoriously irritable bunch. We’re constantly getting in arguments with each other about the best way to do things. This is largely because there is no one “right” way to code – it’s more of an art than a science. However, there are some coding practices that are universally agreed to be bad form. These are known as “coding anti-patterns,” and they can easily slip through code reviews if you’re not careful.
One common coding anti-pattern is “spaghetti code.” This is code that is so tangled and convoluted that it’s impossible to follow. It’s the software equivalent of a bowl of spaghetti – a jumbled mess that you can’t make heads or tails of. Spaghetti code can be very difficult to debug and maintain, so it’s best to avoid it if at all possible.
Another coding anti-pattern is “copy-and-paste programming.” This is when a programmer takes some existing code, copies it, and then modifies it slightly to suit their needs. This might seem like a quick and easy way to get the job done, but it often leads to duplicated code that is hard to keep track of. It also makes it more difficult to make global changes, since you have to remember to change every instance of the duplicated code. Copy-and-paste programming might be tempting, but it’s usually a bad idea in the long run.
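As a rough illustration (hypothetical names, not from any particular codebase): instead of pasting a small rule into every call site, extract it once and let each caller reuse the single definition, so a future change happens in exactly one place.

```java
// A minimal sketch (hypothetical names) of why copy-and-paste duplication hurts:
// the same discount rule pasted twice drifts apart the moment one copy changes.
public class DiscountExample {

    // Instead of pasting the rule into every call site, extract it once...
    static long discountedCents(long priceCents, double discountRate) {
        return Math.round(priceCents * (1.0 - discountRate));
    }

    public static void main(String[] args) {
        // ...and every caller reuses the single definition.
        System.out.println(discountedCents(10_000, 0.10)); // 9000
        System.out.println(discountedCents(2_500, 0.10));  // 2250
    }
}
```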
These are just a few of the many coding anti-patterns that can easily slip through code reviews. So next time you’re doing a review, keep an eye out for them, and try not to let them slip through!
Below is an aggregated list of some coding anti-patterns that can easily slip through code reviews.
Comments: We all want to write meaningful comments to explain our code, but what if someone writes four paragraphs of comments explaining exactly what a piece of code does? This will have no problem passing code review, but it creates frustration for the developers who have to maintain the code: every time they need to change a piece of it, they also have to go through the four paragraphs and maybe rewrite the whole thing, so eventually the reaction becomes “screw it, I’m not touching that code.”
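As a tiny, hypothetical sketch of the usual remedy (the names are made up): pull the block behind a well-named method so the code carries the “what” and a one-line comment carries the “why”.

```java
// Illustrative only: instead of paragraphs explaining what the block does,
// a descriptive name explains the "what" and a short comment keeps the "why".
public class CommentSketch {

    static boolean isEligibleForFreeShipping(long orderTotalCents) {
        // Why: business rule agreed with sales; free shipping starts at $50.
        return orderTotalCents >= 5_000;
    }

    public static void main(String[] args) {
        System.out.println(isEligibleForFreeShipping(7_500)); // true
    }
}
```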
SRP: We want our code to respect the Single Responsibility Principle, and we want developers to write small units of logic that can be easily tested. But what happens when you write too many units? This will have no problem passing code review, and if someone asks, you can just tell them you wrote the code to be easily testable. Once you go over a certain threshold, though, it becomes frustrating to jump between 20 methods in 10 classes just to do a simple task. It becomes the real spaghetti code.
SRP is a principle, not a pattern. In my experience, DRY should guide you to OCP, and OCP to SRP.
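To make the “too many units” problem above concrete, here is a deliberately exaggerated, hypothetical sketch. Both versions are trivially testable, but the fragmented one forces the reader to chase several one-line methods to follow a task that reads perfectly well as a single short method.

```java
// A deliberately exaggerated, hypothetical sketch of over-fragmented "SRP".
public class SrpSketch {

    // Fragmented: every step hidden behind its own indirection.
    static String buildGreetingFragmented(String name) {
        return concat(prefix(), normalize(name));
    }
    private static String prefix() { return "Hello, "; }
    private static String normalize(String name) { return name.trim(); }
    private static String concat(String a, String b) { return a + b; }

    // Cohesive: one small method, still easy to test.
    static String buildGreeting(String name) {
        return "Hello, " + name.trim();
    }

    public static void main(String[] args) {
        System.out.println(buildGreetingFragmented("  Ada "));
        System.out.println(buildGreeting("  Ada "));
    }
}
```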
Indifferent Architecture: You like a framework, so you use it for your next project and don’t think much about it. You put all the Controllers in the Controllers folder, all the Services in the Services folder, and all the Helpers in the Helpers folder, and because frameworks (Rails, Laravel, etc.) operate with a certain level of magic, the simple act of putting your Model in the Models folder gives you a level of assistance that you will love. This will have no problem slipping through the code review because, guess what, you’re following the framework’s guidelines. But fast forward a few months and you end up with the monolith that we all like to hate, and then your developers start hating on monoliths and want to go microservices. The real issue is not the monolith; the real issue was the lack of design and architecture.
The biggest anti-pattern that will slip through code reviews very easily is the singleton pattern. It is an anti-pattern for two reasons:
What is unique today may be duplicated tomorrow: the classic case here is that 20 years ago we used to have one screen per workstation; today two, three, or even four screens are increasingly common. This means that if your development environment uses a singleton for the screen, you are now in trouble!
Even if you really have just one (say, a configuration file), the implementations flying around are absolutely horrific 99.99% of the time
Right, so, why is the mainstream implementation horrific? Here is what people will generally do: because the pattern says that there must be only one instance of a class, they will hide the constructor and instead have a static method called “getInstance” or something similar to create the class and reuse it across the board.
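For reference, here is the mainstream shape being criticised, as a generic sketch (not taken from any specific project): a hidden constructor plus a static getInstance() that every caller reaches for directly, which is what turns it into an invisible dependency.

```java
// The widely copied getInstance() shape being criticised (generic sketch).
public class Logger {

    private static Logger instance;

    private Logger() { }               // constructor hidden

    public static Logger getInstance() {
        if (instance == null) {        // note: not even thread-safe as written
            instance = new Logger();
        }
        return instance;
    }

    public void log(String message) {
        System.out.println(message);
    }
}

// Anywhere in the codebase, callers quietly do this, and the dependency vanishes
// from constructors and method signatures, which is what makes testing painful:
//     Logger.getInstance().log("saved order");
```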
That is the wrong way to go about it. What you should be doing is this instead:
Make the entire singleton class private
Have a normally allocatable class made public
In the public class’ implementation (which has to reside in the same file) create the private class as required (maybe as a static field! That is completely fine)
Use the public class
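As a minimal Java sketch of those four steps (class names are illustrative, not from any particular codebase): the actual singleton stays private and nested, and callers only ever see the normally allocatable public wrapper.

```java
// Illustrative sketch: the real singleton is private and nested; callers only
// ever construct and pass around the public wrapper class.
import java.util.HashMap;
import java.util.Map;

public class Configuration {

    // The one shared instance lives here, created once as a static field.
    private static final Store STORE = new Store();

    public String get(String key) {
        return STORE.get(key);
    }

    // The actual singleton: private, never visible outside this file.
    private static final class Store {
        private final Map<String, String> values = new HashMap<>();

        private Store() {
            values.put("greeting", "hello"); // pretend this was loaded from a file
        }

        String get(String key) {
            return values.get(key);
        }
    }

    public static void main(String[] args) {
        // Callers allocate the public class normally and pass it around,
        // so the dependency stays visible and can be replaced in tests.
        Configuration config = new Configuration();
        System.out.println(config.get("greeting"));
    }
}
```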
This is how you should do a singleton, but that is not what you see around. The net result of the common implementation is a hidden dependency on the singleton, which then means a lot of stuff cannot be tested properly without bringing the singleton in (so you can’t, for example, easily mock it).
Please stop doing singletons or, if you can’t, please do get them right.
Code reviews are really important. However, without a good set of coding standards, they can often become “this is my preference”.
Here’s my suggestion on how to avoid anti-patterns slipping through code reviews:
Read through Martin Fowler’s book “Refactoring”.
As a team, figure out what people think are anti-patterns.
Agree on a list. Define these anti-patterns in your coding standards.
Make sure everyone reads the coding standards, and can access it easily.
Then, you have given one another permission to call each other out when that class gets too large, or the method gets too long, or the method has too many parameters.
Bad Code:
Lots of comments
Meaningless names
Long methods
Methods that do many things
Code that is hard to write unit tests for
Code that doesn’t have unit tests
Code that is tightly coupled to other code
Code that isn’t S.O.L.I.D.
Clever code
Unreadable code
Good Code:
Code that makes sense to another programmer, or to your future self in 6 months.
Statements, methods, and objects that each have a single responsibility.
The bad code may have correct logic, but without comments it leaves the reader guessing at the meaning.
The idea is to exit the code block as soon as you can. A few bonuses arise from this pattern:
Your code is likely more focused on the purpose of the block. Better at avoiding a kind of “run-on sentence” type of programming.
Reduced nesting. The exact same logic could be written with the complicated code inside a nested block guarded by a condition, but early exits keep your more complicated code at the tail end of the function instead of nested near the top.
Helpful to reinforce the fact that validation and parameter checking should be done first. You get used to it and functions start to look weird if they don’t validate input parameters.
Much easier for others to debug your code. Most of the validation is near the top. Less mental brainpower needed because the code is a bit more readable.
Personally, I really like how it makes my code look like block paragraphs. It makes it easy to skim and read quickly.
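Here is a small, self-contained sketch of that style (a hypothetical example): validate and bail out at the top, and keep the happy path flat at the end.

```java
// A minimal sketch of the early-exit / guard-clause style described above.
public final class UsernameValidator {

    public static String normalize(String raw) {
        // Guard clauses: handle the invalid cases first and exit immediately.
        if (raw == null) {
            throw new IllegalArgumentException("username must not be null");
        }
        String trimmed = raw.trim();
        if (trimmed.isEmpty()) {
            throw new IllegalArgumentException("username must not be blank");
        }
        if (trimmed.length() > 32) {
            throw new IllegalArgumentException("username must be at most 32 characters");
        }
        // Happy path, no nesting: lower-case the name and return it.
        return trimmed.toLowerCase();
    }

    public static void main(String[] args) {
        System.out.println(normalize("  Alice  ")); // prints "alice"
    }
}
```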
One function/file to rule them all. This is common in C/C++ for programmers who are still in the early stages of learning how to organize code. They will start filling either a single function (e.g. “main”) or at least a single file with their entire project’s code. This is not a bad way to start a project; I still do this myself. The problem comes when the programmer fails to realize that the code is becoming too large for the most basic organization strategy and keeps filling one container with all their code.
Too classy: Every single object gets its own class with constructors and methods for things which will never actually be needed. This is a textbook example of a programmer who has read a textbook on OOP but hasn’t been shown what good OOP code looks like.
The god object: There is one class with one instantiation which has its fingers in every single part of the program. It manages memory, maintains logs, synchronizes threads, and sends the manager his TPS reports for the day. Basically this is an OOP version of example 1 above, but is something you might still see in poorly maintained code.
Balkanization: The number of classes, files, and folders in your project is directly proportional to the number of developers, specifically because they do not cooperate on the same code and have balkanized the code base into a piece for each developer. This is a behavioral sink for software development in response to poor job security. What better way to secure your position in a company than for you to be the only person in the entire company who understands your code, and what better way to be the only person who understands your code than to be the only person who reads it?
OOP: Object orientation is almost always a stopgap measure to stop bad programmers from doing too much damage to a large code base. Given competent programmers, functional/procedural generic programming with lean data types is more scalable than OOP for the vast majority of projects. This is well illustrated by many C++ projects, where template programming is the actual backbone of the project with classes serving as a light layer of icing on the cake.
Fire and forget: How many times have you personally stumbled onto code that you yourself wrote not too long ago only to realize that you don’t understand how it works anymore? It happens to most programmers often enough that they resent having to edit old code. This can be remedied by explicitly writing down detailed documentation in the comments of your own code with the idea of communicating the actual purpose and design of your code for not just a stranger, but yourself in the future.
Review fewer than 200-400 lines of code (LOC) at a time: More than 400 LOC will demand more time and will demoralise the reviewer, who will know beforehand that the task will take an enormous amount of time.
Aim for an inspection rate of less than 300-500 LOC/hour: It is preferable to review fewer LOC but to look for things such as bugs, possible security holes, possible optimisation failures, and even possible design or architecture flaws.
Take enough time for a proper, slow review, but not more than 60-90 minutes: As this is a task that requires attention to detail, the ability to concentrate decreases drastically the longer the task takes. From personal experience, after 60 minutes of effective code review, either you take a break (go for a coffee, get up from the chair and do some stretching, read an article, etc.), or you start becoming complacent about sensitive matters such as security issues, optimisation, and scalability.
Authors should annotate source code before the review begins: It is important for the author to inform colleagues which files should be reviewed, preventing previously reviewed code from being validated again.
Establish quantifiable goals for code review and capture metrics so you can improve your processes: it is important that the management team has a way of quantifying whether the code review process is effective, such as accounting for the number of bugs reported by the client.
Checklists substantially improve results for both authors and reviewers: What should be reviewed? Without a list, each engineer may look for something in particular and overlook other important points.
Verify that defects are actually fixed! It isn’t enough for a reviewer to indicate where the faults are or to suggest improvements. And it’s not a matter of trusting colleagues; it’s important to validate that the changes were, in fact, well implemented.
Managers must foster a good code review culture in which finding defects is viewed positively. It is necessary to avoid a culture of “why didn’t you write it well the first time?”. It’s important that zero bugs reach production; the development and review stage is where they should be found. It is important to have room for an engineer to make a mistake. Only then can you learn something new.
Beware the “Big Brother” effect: Similar to point 8, but from the engineer’s perspective. It is important to be aware that the suggestions or bugs reported in code reviews are quantifiable. This data should help managers see whether the process is working or whether a particular engineer is struggling, but it should never be used for performance evaluations.
The Ego Effect: Do at least some code review, even if you don’t have time to review it all. Knowing that our code will be peer reviewed prompts us to be more careful about what we write.
Lightweight-style code reviews are efficient, practical, and effective at finding bugs: It’s not necessary to follow the procedure described by IBM 30 years ago, where 5-10 people would shut themselves in periodic meetings with printouts of the code and scrutinise each line. Using tools like Git, you can participate in the code review process, write and associate comments with specific lines, discuss solutions through asynchronous messages with the author, and so on.
During the last 6-7 years I’ve evaluated various code review tools, including:
Atlassian Crucible (SVN, CVS and Perforce)
Google Gerrit (for Git)
Facebook Phabricator Differential (Git, Hg, SVN)
SmartBear Code Collaborator (supports pretty much anything)
Bitbucket code comments
Github code comments
At some point I’ve also just manually reviewed patches which were e-mailed after each commit/push.
I’ve tried many variations of the code review process:
pre-commit vs. post-commit
collecting various metrics & continuously trying to optimize the process vs. keeping it as simple as possible
making code review required for every line vs. letting developers decide what to review
using checklists vs. relying on developers’ experience-based intuition
Based on my experience with the code review process itself and the tools mentioned above, within the context of a small software company, I would make the following three points about code reviews:
Code reviews are very useful and should be conducted even in software which may not be very “mission critical”. The list of benefits is too long to discuss here in detail, but short version: supplementing testing/QA by ensuring quality and reducing rework, sharing knowledge about code, architecture and best practices, ensuring consistency, increasing “bus count”. It’s well worth the price of 10-20% of each developer’s time.
Code reviews shouldn’t require the use of a complex tool (some of which need maintenance of their own) or a time-consuming process. Preferably, no external tool at all.
Code reviews should be a natural part of the development process for each and every feature.
Based on those points, I would recommend the following process & tools:
Use Bitbucket or Github for your source control
Use hgflow/gitflow (or similar) process for your product development
The author creates a Pull Request for a feature branch when it’s ready for review. The author describes the Pull Request to the reviewer either in PR comments (with prose, diagrams, etc.) or directly face-to-face.
The reviewer reviews the Pull Request in Bitbucket/Github. A discussion can be had as Github/Bitbucket comments on PR level, on code level, face-to-face or combining all of those.
When the review is done, the feature branch is merged in.
Every feature goes through the same process
So, my recommended tools are the same you should be using for your source code control:
Bitbucket Pull Requests
Github Pull Requests
Atlassian Stash Pull Requests (if you need to keep the code in-house)
What are some checks you always do on your code before you submit it for code review?
Unit test coverage is above the minimum threshold
Consistent naming convention with rest of codebase
No duplication of functionality
Properly linted/formatted code
Code Review Checklist:
Logic: Is the logic correct for all the use cases?
Performance: Is there a better approach or algorithm for the use case?
Testing: Have unit tests been written? Do they cover all the scenarios and edge cases? Have manual feature tests / integration tests been performed? (I usually don’t require integration tests to be written at the time of code review; I think that’s quite early. I am fine if the changes have been tested in a local stack.)
SOR: I call this separation of responsibility. Is there the necessary control abstraction in your low-level design? How modular is your codebase? Is there a DAO layer in front of the database? Is there a client layer? Is there a manager layer? How have you handled exceptions? Who is taking care of logging? How generic can their methods be? What kind of methods should they expose, and what responsibility should they own at each level? This is probably the best place to apply your knowledge of design patterns. This component also decides how generic, scalable, and extensible your system can be. (A small sketch of this layering follows after this checklist.)
Readability: Short and descriptive variable/method names. Consistent use of standard vocabulary without grammatical mistakes. Methods kept small. A proper naming convention throughout the package, be it camel case or snake case. Consistent naming of variables: do not refer to the same entity differently at different places in your code; avoid unnecessary confusion. Define the scope of every class/method/variable, and when adding a new class or method, think about who is going to use it and who is not.
Automation: If the same few lines of code are being written in multiple places, move them to a method or utility. Avoid redundancy. Make the best use of reusability.
Documentation: Draft the HLD/LLD in a wiki or a document. The key design decisions, proofs of concept, and reviews/suggestions by senior developers should always be consolidated in one single place. This point is not relevant for every code review, but for key implementation reviews it serves as a recipe for the reviewer. Apart from these high-level docs, make sure that you have javadocs/scaladocs for all the public methods. Avoid comments as much as possible; make your code self-explanatory.
Best Practices: Read the manuals, articles, and (in a few scenarios) research papers of the frameworks you consume. Be an ardent visitor of Stack Overflow, check for the best ways to implement a complex use case, and verify that the code abides by them.
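To make the “separation of responsibility” point concrete, here is a hypothetical sketch (illustrative names, not any particular framework) of a service that depends on a narrow DAO interface rather than on the database directly, which is what makes it easy to swap storage or mock it in tests.

```java
// Illustrative layering sketch: service -> DAO interface -> storage.
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

interface UserDao {
    Optional<String> findEmailById(long id);
}

// One possible storage implementation; a JDBC-backed one could replace it later.
final class InMemoryUserDao implements UserDao {
    private final Map<Long, String> emails = new HashMap<>();

    InMemoryUserDao() {
        emails.put(1L, "alice@example.com");
    }

    @Override
    public Optional<String> findEmailById(long id) {
        return Optional.ofNullable(emails.get(id));
    }
}

// The "manager"/service layer owns the business rule, not the storage details.
final class WelcomeMailService {
    private final UserDao users;

    WelcomeMailService(UserDao users) {
        this.users = users;
    }

    String welcomeLine(long userId) {
        return users.findEmailById(userId)
                .map(email -> "Sending welcome mail to " + email)
                .orElse("No such user: " + userId);
    }
}

public class SorSketch {
    public static void main(String[] args) {
        WelcomeMailService service = new WelcomeMailService(new InMemoryUserDao());
        System.out.println(service.welcomeLine(1L));
    }
}
```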
I spend quite a bit of time reviewing code, and some of the common problems I’ve found are:
Over-architecture, by creating lots of superficial interfaces
Premature optimization of code
Reinventing the wheel when something similar already exists in open source or inside the codebase
Coming up with a totally new pattern for doing things when the problem is already solved in the code
Trying hard to fit a design pattern into code where it’s not needed (just because you read about it a few days back)
Very long variable names
Typos in variable names
No comments (I am OK with this if the code reads like a book, but sometimes you are writing something complex, like an algorithm, that won’t make sense to a newcomer, and leaving a one-liner about your decision process would help people understand why you did it).
Lack of enough tests in new code.
No tests or borderline tests when mutating legacy code. Also no effort to make legacy code better.
Wrong technology choice
Introducing a SPOF (single point of failure) into the architecture
Typical database schema issues
Missing indexes
Typos, using Java conventions for DB field names, or conventions mismatched with existing field names
Very long column names
Wrong data types, like strings for dates or varchar(1) for booleans
Field lengths that are too large or too restrictive
Where can I have my code reviewed?
Since you’re looking to review your whole project, Stack Overflow, the Code Review Stack Exchange, and programming subreddits won’t work.
Here are some options that will help a non-technical person such as yourself:
Freelancers and Agencies
Consider hiring a more experienced freelancer or agency to review your outsourced team’s code. You might even be able to hire a local software developer to review their work.
Development Agencies – There are thousands of software development agencies around the world that offer code review. Similar to hiring freelancers, they start at around $10/hour. See this Quora question for tips for choosing a software development company. Be sure to read through the checklist for vetting and hiring them.
On-demand Code Review
If you want a professional option then look at PullRequest.com. It’s a platform for on-demand code review that works with GitHub, Bitbucket, or GitLab to provide code quality feedback from vetted reviewers. They can review your project for bugs, security issues, code maintainability, and code quality issues.
Algorithm and Tricks to save up to 30 cents per litre on Gas in USA and Canada.
Looking to save a few cents per litre on gas in the USA or Canada? Here are a few tips and tricks that can help you do just that.
First, make sure you’re using the gas rewards program at your local gas station. By using a gas rewards card, you can earn points that can be redeemed for discounts at the pump. Additionally, many gas stations offer coupons and promotions that can save you money on gas purchases. Be sure to check the gas station’s website or app for any current offers.
Second, consider carpooling or taking public transportation when possible. This will help you save on gas costs and may even improve your fuel economy. If you must drive, try to consolidate your errands into one trip instead of making multiple trips. This will also help you save on gas.
Finally, keep your car well-maintained. A well-tuned engine can improve your fuel economy by up to 4%. Additionally, properly inflated tires can also improve your fuel economy by up to 3%. By following these simple tips, you can easily save up to 30 cents per litre on gas in the USA and Canada.
You can find a great many discount gift cards for gas on the web. These will work at most Shell, Gulf, and Mobil stations. They will save you a couple of dollars on each purchase, but that can add up to big savings on a yearly basis.
The Optimum program is one of the better value points programs. And the points convert to cash discounts on stuff you buy every day, rather than air travel and catalogues full of slightly aged-out consumer trinkets that you don’t really need.
If you are a Costco member and also optimum member, which option gives you the most savings?
From a quick google of prices in my area it looks like the average price is around $2/L and Costco is currently around $1.75. The value of the Optimum program is more that you can keep your eye out for specials and earn points which can then be put toward gas purchases. But the basic earnings of 10 pts/litre (1¢ equivalent) and redeem up to 4,000 pts ($4 equivalent) aren’t anywhere near 25¢/litre. If you don’t mind the lines 😉
If you have one near, try to fuel up at Mobil gas instead of Esso. Esso provides 15 points per liter, Mobil gas provides 35 points per liter.
I used to have a work vehicle that I filled with Mobil gas, on the company credit card, got approx. 30 dollars of free groceries from Loblaws every week because of this practice.
TD, CIBC, and Scotia all have a promotion right now: 10% cashback on purchases up to $2000 in the first three months.
I use the CIBC Dividend card. Not only do I save on gas ($0.03 off a litre until you hit 300 L, then $0.10 off one time, and then it resets), but I earn cash back everywhere. Last year I earned about $580 cash back; this year I’m over $200 so far.
I bank with CIBC, and as I use my card I pay it off the same day, so I’ve never paid interest.
Note that your max yearly cash back for the 4% (gas and groceries), 2% and 1.5% categories is $800 (4% of $20,000). After $20,000 yearly spend, the 4% cash back ends, and is replaced with 0.5% on all purchases. In other words, if you spend on any of the other categories, you won’t get the $800, because you’ll hit $20,000 total spend before you hit $20,000 on gas and groceries.
I got a Rogers World Elite card and use it for all purchases except gas and groceries, for 1.5% cash back. I use the CIBC Dividend card only for gas and groceries, for 4% cash back.
CAA members save 3 cents per litre at all Shell stations. And Shell uses Air Miles.
4. Drive Sensibly
Quick acceleration and short bursts of speed can cost you a lot when it comes to gas. Slow, steady movement is always preferable to erratic driving. Land Rovers, for example, can get better mileage using cruise control. Practice smooth driving and you’ll definitely save some money through improved gas mileage.
5. Time Your Trips to the Gas Station
Gas prices can rise on Thursdays because of the high likelihood of weekend travel. To avoid these increased prices, fill up the tank before Thursday and ahead of major holidays.
6. Utilize Your Smartphone to Find the Cheapest Gas Station
Your smartphone is for more than browsing Facebook and Instagram. Use it to find the cheapest gas in your area. Apps like AAA TripTik and GasBuddy will help you locate the closest and cheapest fuel.
Something I’ve noticed with the gas saving apps… many times the prices are wrong. I show up at a station, and end up refueling anyway, and then a few minutes later I see it has been put back to the “fake low price”.
I think owners are gaming the system in order to draw people in.
7. Get a Gas Rewards Card
Too few people have a gas rewards card. It’s like not joining a rewards plan even though you’re a long-standing customer. There are a lot of sites out there that can introduce you to deals on fuel rewards. You can get free gas if you collect enough points, so why not? Sign up for that rewards card!
8. Don’t Leave Your Engine Idling for Very Long
Shut off your engine if you’re not going anywhere. You’re wasting gas, and you’re polluting the environment.
Some service stations charge a premium if you pay with a credit card, but some give you discounts for it. Find out which is which and use whatever saves you money.
10. Maintain Your Car
Keeping your vehicle maintained is how you save money on gas over the long haul. If you have a clunker or a vehicle that you treat badly, it will get awful mileage. Simply keeping your tires inflated can improve your gas mileage by 3.3%. So stay on top of your maintenance.
11. Be Picky
Gas station
Stop going to the gas station closest to your home or the interstate just to get it over with. This can cost you almost 15 cents more per gallon. Find a gas station with cheap prices and stick with it.
11. Don’t Overload Your Car
Overloaded vehicle
This is a no-brainer, but it bears repeating. If you’re hauling your whole life around in your vehicle, stop doing it. Obviously, the heavier your vehicle gets, the more gas it takes to cover the same distance. Keep only the bare necessities in your vehicle. Leave the rest at home.
12. Drive more slowly, think ahead, and use engine braking.
The amount of time you gain by speeding is tiny compared to the amount of fuel you can save.
13. Plan out grocery trips for longer times. Instead of going a few times a week to pick up a couple things, go once every 2-3 weeks with a list of everything you’ll need for that timeframe.
14. Drive the smallest stick-shift diesel available. Press in your clutch on downhills, especially long ones on the freeway. Play a game where you try to use as little gas pedal as possible.
15. Buy a more fuel efficient car. That makes the biggest difference.
16. Drive less. Combine trips. Carpool. Walk. Bicycle. Take public transit.
Do things (including many types of work) that can be done over a wire, over that wire, instead of driving to it. Drive a more fuel-efficient vehicle. If people would bother to think about when all of these might be possible, they would find that they generally are possible.
17. Tyres
Find the Tyre pressure placard in your car and make sure your tyres are pumped up to the correct pressure.
Try to do this when you have driven the car for less than 5 minutes. Hot air expands and will give a false reading if the tyres are hot, so do it when they are cold. Do NOT pump them up to the max pressure listed on the side of the tyre.
Keeping your tire pressure right is not only a safety measure but also helps save fuel, as the right amount of tire pressure reduces rolling resistance with the road.
Tip: Tire pressure checks are free at almost every petrol pump, but that doesn’t mean they’re useless. Make use of them every time you can.
Actually, over-inflate your tires for best gas mileage.
The number on your door is the recommended pressure. The max pressure on the tire is the “do not exceed” number. Something in between is fine.
The drawback is that you’re going to wear out the middle of the tire quicker than the sides (because it’ll dome a bit from the higher pressure if you don’t have enough weight to force it flatter again). This might be noticeable after years.
But tires aren’t that expensive, and fuel is. You’ll pay off the small reduction in tire life with the bigger reduction in fuel use (and, especially if you’re in a pinch today, you could kind of consider it a deferred expense). And, it’s a small change you can always taper off again later.
A side effect will be a slightly harsher ride, and slightly less grip (not great for the winter).
Roughly speaking, 50% of your gas usage comes from rolling resistance in the tires, the other 50% from air resistance. At city speeds, tires and stops/starts make up most of your gas cost. Around 2/3 to 3/4 of highway speed is where air resistance takes over. Above 60 mph / 100 km/h is where you really start to gobble fuel disproportionately (10% faster can use roughly 33% more fuel).
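A rough sketch of where a figure like that 33% can come from, under the simplifying assumption that aerodynamic drag dominates at highway speed: drag force grows with the square of speed, so the power needed to push through the air grows with its cube,

$$\frac{P(1.1\,v)}{P(v)} = (1.1)^3 \approx 1.33,$$

i.e. about 33% more fuel burned per unit time at 10% higher speed (measured per mile travelled, the penalty is closer to $(1.1)^2 \approx 1.21$, about 21%).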
Avoid situations where you have to use the brakes. Any time you use the brakes, you’re wasting all the energy you put into accelerating the vehicle. In stop-and-go traffic, this is most of your fuel use. So instead of racing forward to fill gaps and then having to stop, just drive half the speed, steadily. If you see the light is red, get off the gas and coast; don’t accelerate up to it and then hit the brakes. Be careful not to block turning lanes by driving slower; just because you’re stopping at the lights doesn’t mean everyone behind you is.
In short… there’s no free lunch here. If there were ways to save money on gas, those would already be things we’re doing. All the little tips and tricks might add up to 20%, which is like… where gas prices were a month ago.
The only easy way to save money on gas is to drive less.
18. Lose weight.
Get rid of any excess stuff you have in your car. Every extra kilo costs money to haul around. The same goes for aerodynamics: those roof racks you never use? Take them off!
19. Change your driving style.
So many people these days drive aggressively. Stamping your foot to the floor whenever you accelerate is both unnecessary and burns far more fuel than using 50 or 75% throttle. There are other throttle positions than 100%!
Instead of speeding up to close any gap in front of you, leave it there and coast a bit. Someone may change lanes into it; who cares? Watch ahead: if cars start braking, take your foot off the throttle early and coast a bit instead of riding the car in front of you, constantly braking and accelerating.
20. Drive smoothly. It’s amazing how big a difference driving style makes to fuel consumption.
21. Engine Air Filter
Make sure the engine air filter is clean; dirty air filters make for poor fuel consumption.
22. Premium Fuels
Only go for premium fuel if the car manufacturer recommends it. Otherwise, you are just increasing the cost of fuel and the overall running cost of your car. It’s a myth that premium fuel will help you save fuel and increase your car’s mileage.
Tip: Buy regular fuel. Premium fuel costs more and will not save you fuel.
23. Cruise Control
Using cruise control on the highway will give you a smooth ride at a near-constant speed. Ultimately it will add to your mileage and save you a lot of fuel.
24. Gas Pedal Control
If you keep a soft foot on the pedal, you will save a lot of fuel. When we use a hard foot, the car consumes the maximum amount of fuel needed to generate the power we are asking for.
Tip: After reaching a speed of 70-80, try easing off, holding the gas pedal at the fixed position where acceleration is almost zero.
25. Keep RPM Low
Higher RPM means higher fuel consumption; lower RPM helps save fuel and gives every passenger in the car a calmer ride.
Tip: Remember that you only gain a very small amount of time by driving fast with your speed and RPM high; with the traffic on the roads these days, you rarely save more than 5 minutes. Keep it low to save fuel.
26. Save Fuel by Driving Smart
Driving consciously and safely will always help maintain a car’s mileage and save fuel. Avoiding unnecessarily fast pickups and jackrabbit stops will always help save fuel.
Tip: Easy, safe driving helps you save fuel and keeps you safer.
27. Overlooked button on your car may help save on gas
The ‘Air Recirculating’ button on your A/C might cool off your car faster and save you a little gas. On most cars, trucks, and SUVs the air recirculation button is easily identifiable, with its representing symbol of a half-circle inside of the outline of a vehicle. Many people say they’re aware of the button, but are not sure when it should be on or off.
Another function of this climate control system is to stop pollution and exhaust fumes from entering the vehicle. Having this button activated will also help to greatly reduce pollen when driving, which is a big positive if you suffer from outdoor allergens.
“If you don’t switch the air recirculation button on, then your car’s air conditioning will be constantly cooling warm air from outside your vehicle, and will have to work much harder, putting more stress on the blower and air compressor,” said Ruhl.
Another benefit to using the air recirculation feature is the money you could save on gas.
“Cars are usually more fuel-efficient when the air conditioner is set to recirculate interior air. This is because keeping the same air cool takes less energy than continuously cooling hot air from outside,” said Ruhl.
While the recirculation button is great for the summer months, it may be best to avoid it in the winter or when your windows become foggy.
“Anytime you’re using defrost, it’s best to not have that button on. Also, using it while you have your heater on isn’t going to do anything for your vehicle,” said Ruhl.
28. Your driving habits are a huge factor. Very slow accelerations and decelerations help dramatically. Coasting to that upcoming red light instead of keeping on the gas and braking. Chilling at 60 on cruise in the right lane vs accelerating between 65 and 75 passing people in the left. Things like that.
Also, for most cars, above 55 mph it’s better to keep your windows up and use the AC; below 55, it’s better to have the windows down and the AC off. It varies by model due to aerodynamics, but 55 is good enough to give you an idea.
29. Don’t hard accelerate
Try to slow down in a more gentle manner; if you’re lucky, the light will go green before you stop.
Be consistent with your speed. If it’s a 30 mph zone, try not to go faster than that, or to get distracted to the point where your car starts slowing down.
If it’s hot out, keep the windows down; the AC in older cars can make the car consume more gas. I’m not sure how the newer cars are doing with that.
Make sure your tires have good tread; bald tires can spin out more, and if the wear is uneven that can cause additional issues.
30. If you drive an SUV, trade it for a Toyota Corolla
Scientifically proven that the wavelength of reflections on the beige tone is in the optimal bandwidth to reduce optical resistance, thus better fuel efficiency.
Check your engine air filter. Make sure it is clean, replace if necessary. Make sure your tires are filled to the recommended pressure.
Also change spark plugs at their recommended service life.
Also, if your car is over 160k km, it’s a good idea to replace the O2 sensors, as they get slow. I replaced all four sensors in my car and my mileage went from 9.x L/100 km to the high 7’s.
A Prius, or any type of gas/electric hybrid, or a smaller vehicle, like a Toyota Corolla, Honda Civic, Chevy Malibu, Ford Focus, VW GTI or Rabbit.
But there is a direct correlation between How you drive, regardless of What you drive. I have a 1998 Chevy Silverado, with a 5.7L (350 cu in) V8, and I can get great MPG’s when I drive it sensibly, and don’t have a ton of unnecessary stuff/gear in the back, or even back seat.
Make sure the tires are set to the appropriate PSI. Always set them to the pressure listed on the inside of the driver’s door. On that subject, changing the tire size or wheel size and sidewall thickness will also have a negative effect on MPG.
You would be surprised how much stuff a lot of people have laying in the back of their car, and if they would simply clean it out, they could save money.
Also, keeping your vehicle tuned up and the oil changed per the owner’s manual will also help keep the MPG high.
Not speeding away from every stop sign or stop light will also help.
Keeping your speed down on the freeway will help.
However, opting to roll the windows down instead of using the A/C to keep cool will actually create drag on the car and lower the efficiency. So go ahead and crank the A/C up. Not only will rolling the windows up save fuel, it will also reduce noise and fatigue, so you can drive more comfortably.
What burns more gas, accelerating as fast as possible to 60 mph (e.g. 10 seconds) or accelerating slowly (e.g. 30 seconds)?
Not long ago I had a ’16 Subaru WRX. Fast, turbo-charged all-wheel-drive car. Terrible gas mileage. It’s also heavy, roughly two tons.
One day, I did an experiment on the city streets. Rather than accelerate in a controlled manner and drive at a consistent pace, I put the gas pedal all the way down to reach about 15 mph over the speed limit, and then I put the car in neutral, and let it coast. The car would coast a full mile before it was going slow enough (5 to 10 mph below the speed limit) that I had to put it in gear and goose the throttle again full blast and bring it up to 15 mph over the speed limit.
In this simple test, the overall gas mileage skyrocketed. It went from about 25 mpg to more like 40 mpg. And yet I was ultimately going the speed limit on average, and kicking off my trips very quickly.
This led me to a realization. Yes, holding that gas pedal all the way down uses up a lot of gas. But what it also does is important: it brings you up to speed. What also uses up a lot of gas is simply cruising—not coasting, cruising. That’s where most of your gas is being spent, because your engine is expending gas, quite a bit of it, actually, just to keep up and maintain velocity.
And when you accelerate slowly, you’re effectively cruising, without being up to speed, yet with a little extra gas. That’s wasteful, because you’re going slow and still using up plenty of gas. Is it more wasteful than the explosion of rushing your car forward immediately? Actually, perhaps so, if you’re taking too long to do it.
Remember, just keeping that engine turning uses up fuel. Accelerating quickly brings the car up to speed quickly, which brings the engine’s output to its maximum quickly. That is not an infinite dump of fuel; it is limited by what the fuel line, injector, and cylinder can mix with air and compress, which is measurable, and it’s actually not as far off from cruising fuel consumption as people seem to think. Source: Quora
1️⃣ Only buy or fill up your car or truck in the early morning, when the ground temperature is still cold. Remember that all service stations have their storage tanks buried below ground. The colder the ground, the denser the gasoline; when it gets warmer, gasoline expands, so if you buy in the afternoon or in the evening, your gallon is not exactly a gallon. In the petroleum business, the specific gravity and temperature of gasoline, diesel, jet fuel, ethanol, and other petroleum products play an important role.
2️⃣ A 1-degree rise in temperature is a big deal for this business. But the service stations do not have temperature compensation at the pumps.
3️⃣ When you’re filling up, do not squeeze the trigger of the nozzle to the fast setting. If you look, you will see that the trigger has three (3) stages: low, middle, and high. You should be pumping on the low setting, thereby minimizing the vapors created while you are pumping. All hoses at the pump have a vapor return. If you are pumping at the fast rate, some of the liquid that goes into your tank becomes vapor, and those vapors are sucked back into the underground storage tank, so you’re getting less for your money.
4️⃣ One of the most important tips is to fill up when your gas tank is HALF FULL. The reason for this is the more gas you have in your tank the less air occupying its empty space. Gasoline evaporates faster than you can imagine. Gasoline storage tanks have an internal floating roof. This roof serves as zero clearance between the gas and the atmosphere, so it minimizes the evaporation. Unlike service stations, here where I work, every truck that we load is temperature compensated so that every gallon is actually the exact amount.
5️⃣ Another reminder, if there is a gasoline truck pumping into the storage tanks when you stop to buy gas, DO NOT fill up; most likely the gasoline is being stirred up as the gas is being delivered, and you might pick up some of the dirt that normally settles on the bottom.
6️⃣ Note: If the pump repeatedly shuts off early, it could be a sign of a problem with the vapor recovery system, such as a clogged carbon canister.
1. First and foremost, maintain a steady speed.
2. Fill your tires 1 or 2 psi above the prescribed pressure.
3. Do not travel with your AC off, especially on a long-distance journey. With the AC off you will have to lower the windows, and if you are traveling at more than 60 miles per hour that will hurt the car’s aerodynamics and may affect fuel consumption a bit.
4. Remove all unnecessary weight from the car.
5. Choose a well-maintained road even if it takes more time than a bad road.
6. Have your car checked by a mechanic before you travel.
Under 70 mph with your windows up, your AC will use more energy than if the windows were down and the AC off. As your cruising speed increases, the aerodynamic drag on the car increases to the point where having the windows down creates a greater load on the engine than the AC does. This only applies to modern cars, which are generally quite aerodynamic; having the windows up or down doesn’t really make any difference to vintage cars. Remember though, AC takes more power than you might suppose, so on a long, hot journey, driving with the AC off will improve mpg. Taking the AC equipment off altogether will make an even bigger difference, as much as 10%.
Does cruising in a car save on gas? How?
Since cruising involves maintaining the vehicle at a constant velocity, it requires minimal effort (power) from the engine. The power required from the engine is only what is needed to cancel out the deceleration from frictional forces (air drag and road adhesion). Since less power is required from the engine, the ECU ensures that minimal gas is used.
Can lowering your tailgate really save on gas?
No, it’s a myth. In fact, the now-cancelled show MythBusters did an episode on it; a pretty legit test, if I do say so, although if you have a truck with two gas tanks you could test it yourself, as I have. The one thing that can help seems counterintuitive, which is to add a little weight, around 100 pounds or so depending on the truck, and make sure it’s over or behind the rear axle in the bed. What this does is give the rear wheels a bit more traction, and that increases your gas mileage a little. A trick I learned from my grandpa as a curious little kid wondering why he always had a couple of spares mounted to each side of the bed right up against the tailgate. Those old gas guzzlers need all the efficiency they can get.
Bonus: it also works better in snow, ice, and slush. Get some sandbags and throw them in the same spot behind the axle and you limit fishtailing and sliding in the winter; that’s more weight than the hundred pounds, plus it has multiple uses. If you get stuck with the tires spinning on ice, you can open up a sandbag and pour the sand in front of and behind the tire to help gain traction. Make sure to do both sides of the truck, as you probably won’t have positraction. Additionally, if it’s not too cold, you can pee on the ice around the tire. I have gotten many a person unstuck with a little sand and piss.
Can I keep driving on eco mode? How much does it save on gas?
Economy mode is useful in most conditions, but be advised that some engines need to be “blown free” by using higher RPM and full engine load in order to keep the exhaust/turbo system unclogged. That applies especially to diesel engines with an EGR system. Driven only in “grandfather” mode, those engines will need an extensive overhaul well before reaching their estimated end of service life (which absolutely nullifies any eventual gains from eco mode).
What are some ways to save on gas annually?
To save gas, you should follow your car manufacturer’s instructions, if your question refers to the gasoline that makes your car run. If your question refers to the natural gas that you use at home to heat food, water, etc., then the only recommendation is to watch for any leaks if you suspect that you are losing gas; having an experienced technician fix those leaks will resolve your problem. Coming back to your car: not speeding, and not letting the engine idle for a long time just to keep the air conditioner (or the heater in winter) running, are two important ways to reduce gasoline consumption.
Summary:
Looking to save a few cents per litre on gas? Here are a few tips and tricks that can help you do just that:
1. Check gas prices before you fill up. Many gas stations offer discounts for cash, so it’s worth checking beforehand to see if there’s a station nearby that offers a cheaper price.
2. Use coupons. Many gas stations offer coupons that can be used to save money at the pump. Simply present the coupon when you’re paying and you’ll automatically get a discount.
3. Shop around for gas cards. Some gas cards offer discounts of up to 5 cents per litre, so it’s worth doing some research to see if you could be saving even more money.
4. Drive less. This one is obvious, but the less you drive, the less gas you’ll need to purchase. So, if you can carpool, take public transportation, or walk/bike instead of driving, you’ll save yourself some money in the long run.
5. Keep your car well-maintained. A well-tuned engine can improve your fuel economy by up to 4%, so it’s worth getting your car checked out by a mechanic regularly.
By following these tips, you can easily save money on gas without making major changes to your lifestyle.
Does getting a Tesla make financial sense in terms of cost savings on gas and maintenance?
If you looked at all the cars in the world and calculated which one had the lowest cost per mile transporting someone from Point A to Point B, it would probably not be a Tesla. If people used that criterion for buying a car, then there would be only one car in each class. People buy cars for lots of reasons. If you’re keeping the car for 5 years, some high-mileage hybrids will cost less (absent government subsidies) than a Tesla. Gas is cheap these days. Push it out 10 years, or if gas prices go back up, the calculus is different. Your Tesla will outperform that high-mileage hybrid and be a lot more fun to drive. How much is that worth to you?
With rising prices, what are smart ways to save money or good alternatives like horse and carriage to save on gas?
This is my plan for tackling the current inflationary environment in the United States:
Limit discretionary driving. I have a gas-powered SUV and paid nearly $60 to fill its tank last week. I no longer drive around town just for the hell of it—I have to be strategic. Instead of driving to Target or Walmart for household goods and groceries, I order these necessities for delivery via Amazon. If I do need to drive to one part of town, I hit all the shops in that area at once and act as if I won’t be back for weeks. Ultimately, I am driving with intent—every trip has a purpose.
Meal substitution. In my area of the U.S., beef is less expensive than chicken. Thus, I substitute beef for chicken and prepare meals like spaghetti, burgers, and chili. Also, my cost of groceries has risen faster than the cost of a Chipotle burrito, for instance, so I sometimes eat a Chipotle burrito instead of eating at home.
Plan for higher utilities. My energy bill is much higher today than it was last year. Since I live in an apartment, each unit’s bill is decided by dividing the energy cost for the entire building by the number of occupied units. Thus, I have very little control over the cost of my monthly bill. I must prepare for this expense and not let it blindside me.
Limit unnecessary consumption. Now is not the time to be frivolous with money. All nonessential consumption (i.e., online shoe shopping, going to the movies, etc.) is essentially placed on hold.
Invest tactfully. With inflation running hot, the Federal Reserve likely hiking interest rates in the coming months, and macroeconomic and political uncertainty, the stock and crypto markets may fall further before rising once again. Having dry powder (i.e., cash) on hand to take advantage of the situation is not a bad idea. I’ve been building my cash position over the past couple of months, so I can buy assets when others are fearful and need/decide to sell. As a long-term investor, you want to buy into fear and weakness, and I believe we are in that environment.
How much money do you save on gas with a hybrid?
If you compare against a small, light ICE vehicle, you won’t save anything, but if you compare an ICE car of the same weight, then you will save money, possibly as much as $10 every 200 miles.
How much money do you save on gas by paying cash instead of credit in the long-term?
Using a 10 cent per gal difference between cash & cc, that comes to about $28 extra per year to use my credit card for my mileage and average MPG. That’s about $2.33/month so not much at all. Then you need to take into account that I get 3% back using my credit card at the pump from my credit card rewards program. That comes to $29/year. Those were round number calculations I did though so we’ll just call it even.
Does cruise control actually save gas or is that a myth?
The cruise control itself does not save any gas compared to simply keeping your foot at the same position. However, what cruise control does tend to do, is influence the driving style of the human inside.
The whole point of the cruise control is that you don’t need to constantly control the throttle. And thus you will tend to want to avoid needing to do that while using it. At the most, you will want to disengage the cruise control, to reduce speed slowly when needed, and then re-engage when you can overtake.
The result is that you tend to start looking further ahead, a few cars further than the one directly in front of you. Coming up on a car, you will decide earlier if you can overtake, or if you lift the throttle. This is very positive for reducing fuel consumption.
Many drivers without cruise control will not lift until the last moment, and then often need to brake when they can’t overtake. This is disastrous for the fuel consumption.
There are some special situations where cruise control itself can help reducing fuel consumption. One of those is when using the highest gear at very low throttle. This tends to be the most fuel-efficient configuration, but with so little torque, it can be difficult to keep the speed constant. The cruise control can do that very well. If you can’t manage to drive comfortably at that speed yourself, but the cruise control can, then that is a case where the cruise control directly allows higher fuel efficiency.
Another is when your car doesn’t have a centre console near your foot, and thus it is difficult to rest your foot against it to help keep a steady position. In that case, driving without cruise control might lead to constant speed changes as well, and the cruise control can help smooth that out. That will also improve fuel efficiency slightly.
But in general, anything the cruise control does, you can do as well. It is the driving style that improves fuel efficiency. Cruise control can encourage a more relaxed driving style, and that helps. If you were already driving relaxed and smooth, then you’ll not notice any difference.
By improving public roads in order to minimize rolling resistance and enhance traction, how much money could be saved on gas consumption and avoidance of traffic accidents?
Patent 6,923,124 has a rolling surface that is 1000 times smoother than typical asphalt. This smooth rolling surface and engineered reverse sag allows steel wheels instead of energy wasting rubber tires. All oil can be avoided (saved) by switching to aerodynamic vehicles rolling on three more perfect rolling surfaces configured in a triangle. There is no reason a car should ever leave the normally traveled portion of the roadway. Designing in 3D means a vehicle can never come off the designated trajectory. Instead of a reactive suspension producing pitch, yaw and roll the guideway produces those motions with precision. This improved “road” (guideway) allows for 180 mph travel at a tiny fraction of the required energy. This in turn allows all transportation to be powered by a 7 foot wide s
If I drove 100 miles every day, how long would it take me to pay off my electric car with the money I save on gas?
(Answered by the driver of a 2014 Tesla Model S and builder of a Honda del Sol EV conversion.)
Ok, let’s get serious, and go about doing this the way a person would who’s really trying to save money. Two scenarios:
Aggressive scenario: Buy a used 2014 Nissan Leaf for $8,000. It will only have about 30,000 miles and a range around 85 miles. In my area, electricity will cost 2 cents per mile since our electricity is fairly cheap. Assume the gas car being replaced was getting 30 mpg, so its fuel cost is 11 cents per mile. You are commuting to work each day, 50 miles each way. You don’t have enough range to get home, but your employer offers free charging. (That can happen. My employer does.) Driving 100 miles per day, paying for half and getting half from your employer, will cost $1.00 per day, or $30 per month. The gas car would cost $11 per day or $330 per month. Savings is $300 per month.
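As a rough sketch of the payoff arithmetic that scenario sets up (using the answer’s own figures, and assuming the truncated “$300 per” is per month):

// Illustrative payoff estimate for the aggressive scenario above (figures from the answer).
public class EvPayoff {
    public static void main(String[] args) {
        double carPrice = 8000.0;         // used 2014 Nissan Leaf
        double milesPerDay = 100.0;
        double evCostPerMile = 0.02;      // cheap electricity
        double gasCostPerMile = 0.11;     // 30 mpg gas car
        double evDaily = milesPerDay * evCostPerMile / 2;  // employer pays for half the charging
        double gasDaily = milesPerDay * gasCostPerMile;
        double monthlySavings = (gasDaily - evDaily) * 30; // roughly $300 per month
        System.out.printf("Pays for itself in about %.0f months%n", carPrice / monthlySavings);
    }
}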
What kind of car should I buy that saves on gas?
What’s the best car that will save on gas/maintain car value overtime?
Short answer: Toyota Corolla or Honda Civic
But fuel economy has a lot to do with how you drive, regardless of what you drive. I have a 1998 Chevy Silverado with a 5.7L (350 cu in) V8, and I can get great MPG when I drive it sensibly and don’t have a ton of unnecessary stuff/gear in the back, or even the back seat.
Make sure the tires are set to the appropriate PSI. Always set them to the pressure listed on the inside of the driver’s door. On that subject, changing the tire size, or the wheel size and sidewall thickness, will also have a negative effect on MPG.
You would be surprised how much stuff a lot of people have laying in the back of their car, and if they would simply clean it out, they could save money.
Also, keeping your vehicle tuned up and the oil changed per the owner’s manual will help keep the MPG high.
Not speeding away from every stop sign or stop light will also help.
Keeping your speed down on the freeway will help.
However, opting to roll the windows down instead of using the A/C to keep cool will actually create drag on the car and lower efficiency, so go ahead and run the A/C. Not only will keeping the windows up save fuel, it will also reduce noise and fatigue, so you can drive more comfortably.
When I have little gas left in my car, is it better to drive fast or slow so that I can get the best distance out of the amount of gas left?
Look at all the other mileage techniques that people have formulated over the years; they all apply. Basically:
1. Accelerate firmly from a stop. Too slowly, and you waste time in the low gears, which are inefficient. Too fast, and your engine burns more fuel than it needs to. 8–10 seconds to 40 mph is good; get a feel for your car, or maybe get an OBD sensor to monitor fuel usage directly (just about any car from the mid-1990s on has one).
2. Try to get to the top gear, and at the lowest RPM. The engine spins the slowest for maximum distance. A little slower is usually OK, especially if the car has a poor drag coefficient or there are a lot of stops. Accelerating to top gear only to brake for a stop light is a waste of fuel.
3. Modern cars cut fuel when engine braking. Try to roll as far/long as possible without using the brakes, and avoid idling. Braking early and then rolling is better than coming to a complete stop, since idling is a constant drain, and if the light goes green you keep your kinetic energy. You can usually feel when the ECU starts fuel delivery again, because the engine braking lessens. Forcing downshifts is not recommended, due to:
– increased wear on the transmission, which is more expensive to replace than brakes, and
– the spurt of fuel needed to kick the RPMs up.
It may still be worth it if you need every last drop; if so, try downshifting early. Also, try not to coast in neutral, since the engine is still running – and it’s generally illegal.
4. coast up hill, accelerate downhill (where possible). Don’t roll down the hill backwards.
5. If in a Hybrid, try to coast at 0 throttle and 0 regen. Regen, while nice, is fundamentally inefficient due to multiple transformations of energy. At 0 throttle, the engine is off, and no fuel is used. Hybrids generally have low drag, so can go pretty far on flat ground.
6. Tailgating can save some fuel, but it isn’t really safe. A few car lengths of distance can still yield a bit, though don’t overspeed to do so.
7. Turn engine off if you’re gonna be stopped for long periods of time.
Is driving slowly up a hill (consuming less fuel but taking longer) or fast (consuming more fuel but taking less time) the better choice for fuel saving? The hill would be 1 km, for reference.
The answer is matching the proper rev range to power to be most efficient.
The real-world answer is that if it’s just a kilometer, the difference is negligible.
Engines are usually most efficient somewhere between a third and half of the RPM range, and at decent load. So if you would need to floor it to get up the hill in the current gear, downshift; otherwise just press the pedal slightly harder and keep your speed.
As long as you can engine-brake downhill, the speed doesn’t really matter; just keep the usual traffic speed.
In general accelerating just to slow down later is worse than just keeping steady pace, especially if there are brakes involved.
When accelerating in a car does it use more gasoline to accelerate rapidly as opposed to slowly?
A car is most efficient when in its highest gear. If you accelerate too slowly, you will spend too much time in the lower gears before you get into the highest gear. Therefore, accelerating excessively slowly is not the most economical technique. Thus, advice to accelerate slowly to save fuel is WRONG!
A few decades ago, BMW did some tests to determine the most economical way to drive their cars. Although that was before fuel injection became common, I’m sure that the rules have not changed very much. They found that for their cars, the most economical technique was to accelerate with a heavy foot (2/3 to 3/4 throttle) but upshift at only 2000 rpm. That works well for a manual transmission, but is generally impossible with an automatic transmission because it will upshift at a considerably higher speed if you use a heavy foot and, just as bad, delay locking the torque converter. So, with an automatic transmission, the most economical technique is probably to accelerate at a moderate rate, i.e., not too fast and not too slowly.
The rules may have changed slightly because of modern electronic fuel injection systems which control the fuel mixture better. They are less likely to deliver an excessively rich mixture at wide throttle openings which occur with a very heavy foot.
With an Otto-cycle engine (4-stroke, spark ignition), the throttle valve is an important source of inefficiency. The power required to suck in air against the vacuum created by the throttle valve wastes fuel. For that reason, an Otto-cycle engine is most efficient when the throttle valve is wide open, or nearly so, provided that the fuel system does not deliver an excessively rich mixture under those conditions. That’s why it is most efficient to use a heavy foot and upshift at low speeds, but not at such low speeds that the engine knocks or doesn’t run smoothly, since that could cause damage.
The most inefficient thing you can do is use a lower gear than necessary for the power you are using. So, if you delay upshifting until 3000 rpm when, with a heavier foot, you could get the same power at 2000 rpm, you are wasting fuel. So, for fuel efficiency, you should upshift at the lowest possible speed that will provide the power you need, but not at such a low speed that the engine protests.
In a vehicle with an automatic transmission, what burns more gas: accelerating as fast as possible to 60 mph, or accelerating slowly?
In simplistic physics terms, it makes no difference. You create the same amount of kinetic energy either way – and theoretically, that means you must burn the same amount of fuel.
For an internal combustion engine with gears it gets complicated.
A conventional car engine has a range of RPM’s at which the engine operates most efficiently. At lower or higher RPM’s gas consumption is worse.
So the trick is to keep the car in that band.
With a manual gearbox – the best approach is to push hard on the pedal to get the RPMs into the efficient range – then accelerate more smoothly to the top of that range – then upshift.
If your car has enough gears, you can arrange to stay in the efficient range for all but the initial acceleration in 1st gear.
However, with an automatic (and especially automatics with not many gears in their gearbox) – you have no direct control over that – so it becomes a matter of tricking the gearbox into doing what you want. With modern gearboxes, you’d hope that the manufacturer set the shift points for efficiency – but it depends on the car. For a sports car they probably optimized the shift pattern for best 0–60 time – so they’d keep the engine in the “power zone” of RPM’s rather than in the “efficiency zone”…for a family sedan, the reverse would be the case. Many cars have a “sport” button which essentially lets you choose between keeping the engine in the power band or the efficiency band.
But even on the “economy” setting, the software won’t be able to prevent you from demanding performance that drives it out of the economy range.
It also varies depending on the air temperature – when the air is cold, it’s more dense and the fuel management software can burn fuel in larger quantities than on hot days – and that may influence the decision.
There are other considerations too. If you accelerate and brake gently then it takes longer to get you where you’re going. This means that the air conditioner, radio, lights, computer(s), etc are running for longer…and that takes energy too.
On the other hand – if you continually red-line the engine, it’ll wear out faster and a worn out engine uses more gas than a good engine.
Looking to save a few cents per litre on gas? Here are a few tips and tricks that can help you do just that:
1. Check gas prices before you fill up. Many gas stations offer discounts for cash, so it’s worth checking beforehand to see if there’s a station nearby that offers a cheaper price.
2. Use coupons. Many gas stations offer coupons that can be used to save money at the pump. Simply present the coupon when you’re paying and you’ll automatically get a discount.
3. Shop around for gas cards. Some gas cards offer discounts of up to 5 cents per litre, so it’s worth doing some research to see if you could be saving even more money.
4. Drive less. This one is obvious, but the less you drive, the less gas you’ll need to purchase. So, if you can carpool, take public transportation, or walk/bike instead of driving, you’ll save yourself some money in the long run.
5. Keep your car well-maintained. A well-tuned engine can improve your fuel economy by up to 4%, so it’s worth getting your car checked out by a mechanic every
Well, this may or may not be cost efficient. It might actually be cheaper to buy new cars every 100,000 miles or so. But here we go.
Get a good vehicle. Modern pickup trucks and SUV’s are not good vehicles. Volvos are affordable and are well built. So are BMWs and Mercedes. Look at the van the American Pickers drive – it’s a Mercedes. I wouldn’t even rule out many American production cars.
Change your oil as frequently as it says in the owner’s manual. And don’t scrimp. You don’t have to get ultra expensive synthetics, but get something more than the bare minimum.
Do other automotive maintenance as frequently as it says in the owner’s manual. Car parts go bad. It’s not just tires either.
Drive carefully. Accelerate and decelerate smoothly. Drive at or near the speed limit. My sister was using our parent’s old ’96 Saturn until about two years ago when some idiot t-boned her by running a stop sign.
Speaking of Saturns, which were great in cold climates because they didn’t use a lot of metal: if you live anywhere they use road salt, keep the car as clean and rust-free as possible. Best to drive in Texas – Texas has a good climate for cars. They don’t know what road salt is in Texas.
Park it in a garage. This is optional if you live somewhere with good car weather. Like Texas.
What is the tech stack behind Google Search Engine?
Google Search is one of the most popular search engines on the web, handling over 3.5 billion searches per day. But what is the tech stack that powers Google Search?
The PageRank algorithm is at the heart of Google Search. This algorithm was developed by Google co-founders Larry Page and Sergey Brin and patented in 1998. It ranks web pages based on their quality and importance, taking into account things like incoming links from other websites. The PageRank algorithm has been constantly evolving over the years, and it continues to be a key part of Google Search today.
However, the PageRank algorithm is just one part of the story. The Google Search Engine also relies on a sophisticated infrastructure of servers and data centers spread around the world. This infrastructure enables Google to crawl and index billions of web pages quickly and efficiently. Additionally, Google has developed a number of proprietary technologies to further improve the quality of its search results. These include technologies like Spell Check, SafeSearch, and Knowledge Graph.
The technology stack that powers the Google Search Engine is immensely complex, and includes a number of sophisticated algorithms, technologies, and infrastructure components. At the heart of the system is the PageRank algorithm, which ranks pages based on a number of factors, including the number and quality of links to the page. The algorithm is constantly being refined and updated, in order to deliver more relevant and accurate results. In addition to the PageRank algorithm, Google also uses a number of other algorithms, including the Latent Semantic Indexing algorithm, which helps to index and retrieve documents based on their meaning. The search engine also makes use of a massive infrastructure, which includes hundreds of thousands of servers around the world. While Google is the dominant player in the search engine market, there are a number of other well-established competitors, such as Microsoft’s Bing search engine and Duck Duck Go.
https://bazel.build is another open-source framework that is heavily used all across Google, including for Search.
Google has general information on you, the kinds of things you might like, the sites you frequent, etc. When it fetches search results, they get ranked, and this personal info is used to adjust the rankings, resulting in different search results for each user.
At a basic level, all search engines have something like an inverted index, so you can look up words and associated documents. There may also be a forward index.
One way of constructing such an index is by stemming words. Stemming is done with an algorithm that boils words down to their basic root. The most famous stemming algorithm is the Porter stemmer.
However, there are other approaches. One is to build n-grams, sequences of n letters, so that you can do partial matching. You often would choose multiple n’s, and thus have multiple indexes, since some n-letter combinations are common (e.g., “th”) for small n’s, but larger values of n undermine the intent.
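As a rough illustration of the n-gram idea (a minimal sketch only, not how any real search engine stores its index): break each word into letter trigrams and keep an inverted index from trigram to document IDs, so a partial or misspelled query term still shares trigrams with the indexed word.

import java.util.*;

// Minimal sketch: an inverted index over character trigrams, for partial matching.
public class TrigramIndex {
    private final Map<String, Set<Integer>> index = new HashMap<>();

    public void add(int docId, String text) {
        for (String word : text.toLowerCase().split("\\W+")) {
            for (int i = 0; i + 3 <= word.length(); i++) {
                index.computeIfAbsent(word.substring(i, i + 3), k -> new HashSet<>()).add(docId);
            }
        }
    }

    // Return documents sharing at least one trigram with the query term.
    public Set<Integer> lookup(String term) {
        Set<Integer> hits = new HashSet<>();
        String w = term.toLowerCase();
        for (int i = 0; i + 3 <= w.length(); i++) {
            hits.addAll(index.getOrDefault(w.substring(i, i + 3), Set.of()));
        }
        return hits;
    }

    public static void main(String[] args) {
        TrigramIndex idx = new TrigramIndex();
        idx.add(1, "fuel efficiency tips");
        idx.add(2, "search engine indexing");
        System.out.println(idx.lookup("efficent")); // misspelled, but still hits document 1
    }
}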
I don’t know that we can say “nothing absolute is known”. Look at misspellings. Google can resolve a lot of them. This isn’t surprising; we’ve had spellcheckers for at least 40 years. However, the less common a misspelling, the harder it is for Google to catch.
One cool thing about Google is that they have been studying and collecting data on searches for more than 20 years. I don’t mean that they have been studying searching or search engines (although they have been), but that they have been studying how people search. They process several billion search queries each day. They have developed models of what people really want, which often isn’t what they say they want. That’s why they track every click you make on search results… well, that and the fact that they want to build effective models for ad placement.
Each year, Google changes its search algorithm around 500–600 times. While most of these changes are minor, Google occasionally rolls out a “major” algorithmic update (such as Google Panda and Google Penguin) that affects search results in significant ways.
For search marketers, knowing the dates of these Google updates can help explain changes in rankings and organic website traffic and ultimately improve search engine optimization. Below, we’ve listed the major algorithmic changes that have had the biggest impact on search.
The original algorithm took a starting page and added all the unique words on the page (if a word occurred more than once on the page, it was only counted once) to the index, or incremented the index count if a word was already in the index.
The page was indexed by the number of references the algorithm found to the specific page. So each time the system found a link to the page on a newly discovered page, the page count was incremented.
When you did a search, the system would identify all the pages with those words on it and show you the ones that had the most links to them.
As people searched and visited pages from the search results, Google would also track the pages that people would click to from the search page. Those that people clicked would also be identified as a better quality match for that set of search terms. If the person quickly came back to the search page and clicked another link, the match quality would be reduced.
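A toy version of that early scheme (purely illustrative, nothing like Google’s actual code) might keep a word-to-pages index plus a per-page inbound-link count and sort the matching pages by that count:

import java.util.*;

// Toy version of the scheme described above: match pages by word, rank by inbound links.
public class ToyRanker {
    Map<String, Set<String>> wordToPages = new HashMap<>();
    Map<String, Integer> inboundLinks = new HashMap<>();

    void indexPage(String url, String text, List<String> outgoingLinks) {
        for (String word : text.toLowerCase().split("\\W+")) {
            wordToPages.computeIfAbsent(word, k -> new HashSet<>()).add(url);
        }
        for (String target : outgoingLinks) {
            inboundLinks.merge(target, 1, Integer::sum);   // each newly discovered link bumps the count
        }
    }

    List<String> search(String word) {
        List<String> results = new ArrayList<>(wordToPages.getOrDefault(word.toLowerCase(), Set.of()));
        results.sort(Comparator.comparingInt((String u) -> inboundLinks.getOrDefault(u, 0)).reversed());
        return results;
    }

    public static void main(String[] args) {
        ToyRanker r = new ToyRanker();
        r.indexPage("a.example", "fuel saving tips", List.of("b.example"));
        r.indexPage("b.example", "fuel economy", List.of());
        System.out.println(r.search("fuel"));   // b.example ranks first: it has one inbound link
    }
}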
Now, Google uses natural language processing, a method of trying to guess what the user really wants. From that, it finds similar words that might give a better set of results, based on searches done by millions of other people like you. It might assume that you really meant some other word instead of the word you used in your search terms. It might just give you matches in the list with those other words as well as the words you provided.
It really all boils down to the fact that Google has been monitoring a lot of people doing searches for a very long time. It has a huge list of websites and search terms that have done the job for a lot of people.
There are a lot of proprietary algorithms, but the real magic is that they’ve been watching you and everyone else for a very long time.
What programming language powers Google’s search engine core?
C++, mostly. There are little bits in other languages, but the core of both the indexing system and the serving system is C++.
Originally Answered: Why “Google” is not shown as search result when one googles for “Search Engine”?
Our ranking algorithm simply doesn’t rank google.com highly for the query “search engine.” There is not a single, simple reason why this is the case. If I had to guess, I would say that people who type “search engine” into Google are usually looking for general information about search engines or about alternative search engines, and neither query is well-answered by listing google.com.
To be clear, we have never manually altered the search results for this (or any other) specific query.
The basic idea is using an inverted index. This means for each word keeping a list of documents on the web that contain it.
Responding to a query corresponds to retrieval of the matching documents (This is basically done by intersecting the lists for the corresponding query words), processing the documents (extracting quality signals corresponding to the doc, query pair), ranking the documents (using document quality signals like Page Rank and query signals and query/doc signals) then returning the top 10 documents.
Here are some tricks for doing the retrieval part efficiently:
– distribute the whole thing over thousands and thousands of machines
– do it in memory
– caching
– look first at the query word with the shortest document list
– keep the documents in each list in reverse PageRank order, so we can stop early once we find enough good-quality matches
– keep lists for pairs of words that occur frequently together
– shard by document id, so the load is somewhat evenly distributed and the intersection is done in parallel
– compress messages that are sent across the network
– etc.
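A minimal sketch of the “shortest list first” intersection mentioned above (assuming each posting list is just a sorted set of document IDs; real systems use far more compact structures):

import java.util.*;

// Sketch of query-time retrieval: intersect posting lists, starting from the shortest one.
public class PostingIntersection {
    static List<Integer> intersect(List<TreeSet<Integer>> postingLists) {
        postingLists.sort(Comparator.comparingInt(TreeSet::size));   // shortest list first
        List<Integer> result = new ArrayList<>(postingLists.get(0));
        for (int i = 1; i < postingLists.size(); i++) {
            result.retainAll(postingLists.get(i));                   // drop docs missing a query word
        }
        return result;
    }

    public static void main(String[] args) {
        TreeSet<Integer> cheap = new TreeSet<>(List.of(3, 7));
        TreeSet<Integer> fuel  = new TreeSet<>(List.of(1, 3, 5, 7, 9));
        System.out.println(intersect(new ArrayList<>(List.of(fuel, cheap)))); // [3, 7]
    }
}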
Jeff Dean in this great talk explains quite a few bits of the internal Google infrastructure. He mentions a few of the previous ideas in the talk.
He goes through the evolution of the Google Search Serving Design and through MapReduce while giving general advice about building large scale systems.
As for complexity, it’s pretty hard to analyze because of all the moving parts, but Jeff mentions that the latency per query is about 0.2 s and that each query touches on average 1000 computers.
LaMDA is Google’s latest artificial intelligence (AI) chatbot. Blake Lemoine, a Google AI engineer, has claimed it is sentient. He’s been put on leave after publishing his conversations with LaMDA.
If Lemoine’s claims are true, it would be a milestone in the history of humankind and technological development.
Google strongly denies LaMDA has any sentient capacity.
Baidu is popular in China, Yandex is popular in Russia.
Yandex is great for reverse image searches; Google just can’t compete with Yandex in that category.
Normal Google reverse search is a joke (except for finding a bigger version of a pic, it’s good for that), but Google Lens can be as good or sometimes better at finding similar images or locations than Yandex depending on the image type. Always good to try both, and also Bing can be decent sometimes.
Bing has been profitable since 2015 even with less than 3% of the market share. So just imagine how much money Google is taking in.
Firstly: Yahoo, DuckDuckGo, Ecosia, etc. all use Bing to get their search results. Which means Bing’s usage is more than the 3% indicated.
Secondly: This graph shows overall market share (phones and PCs). But search engines make most of their money on desktop searches, due to more screen space for ads. And Bing’s market share on desktop is WAY bigger; its market share on phones is ~0%, while its American desktop market share is 10–15%. That is where the money is.
What you are saying is in fact true though. We make trillions of web searches – which means even three percent market-share equals billions of hits and a ton of money.
I like DuckDuckGo, and they have good privacy features. I just wish their maps were better, because if I’m searching for a local restaurant, nothing is easier than Google for going from the search to the map to the company’s webpage. But for informative searches I think it gives a more objective, less curated return.
Use Ecosia and profits go to reforestation efforts!
Turns out people don’t care about their privacy, especially if it gets them results.
I recently switched to using the Brave browser and DuckDuckGo, and I basically can’t tell the difference from using Google and Chrome.
The only times I’ve needed to use Google are for really specific searches where DuckDuckGo doesn’t always seem to give the expected results. But for daily browsing it’s absolutely fine and far, far better for privacy.
Does Google Search have the most complex functionality hiding behind a simple looking UI?
There is a lot that happens between the moment a user types something in the input field and when they get their results.
There is a high-level overview of how Google Search works, but the gist of it is that there are dozens of subsystems involved and they all work extremely fast. The general idea is that Search is going to process the query, try to understand what the user wants to know or accomplish, rank these possibilities, prepare a results page that reflects this, and render it on the user’s device.
I would not describe the UI as simple. Yes, the initial state looks like a single input field on an otherwise empty page. But there is already a lot going on in that input field and how it’s presented to the user. And then, as soon as the user interacts with the field – for instance, as they start typing – a ton of other things happen: Search is able to pre-populate suggested queries really fast. Plus there’s a whole “syntax” to search, with operators and whatnot, and there are many different modes (image, news, etc.).
One recent iteration of Google Search is Google Lens. The Google Lens interface is even simpler than the single input field: just take a picture with your phone! But under the hood a lot is going on. Source.
Conclusion:
The Google search engine is a remarkable feat of engineering, and its capabilities are only made possible by the use of cutting-edge technology. At the heart of the Google search engine is the PageRank algorithm, which is used to rank web pages in order of importance. This algorithm takes into account a variety of factors, including the number and quality of links to a given page. In order to effectively crawl and index the billions of web pages on the internet, Google has developed a sophisticated infrastructure that includes tens of thousands of servers located around the world. This infrastructure enables Google to rapidly process search queries and deliver relevant results to users in a matter of seconds. While Google is the dominant player in the search engine market, there are a number of other search engines that compete for users, including Bing and Duck Duck Go. However, none of these competitors have been able to replicate the success of Google, due in large part to the company’s unrivaled technological capabilities.
Programming, Coding and Algorithms Questions and Answers.
Coding is a complex process that requires precision and attention to detail. While there are many resources available to help learn programming, it is important to avoid making some common mistakes. One mistake is assuming that programming is easy and does not require any prior knowledge or experience. This can lead to frustration and discouragement when coding errors occur. Another mistake is trying to learn too much at once. Coding is a vast field with many different languages and concepts. It is important to focus on one area at a time and slowly build up skills. Finally, another mistake is not practicing regularly. Coding is like any other skill- it takes practice and repetition to improve. By avoiding these mistakes, students will be well on their way to becoming proficient programmers.
In addition to avoiding these mistakes, there are certain things that every programmer should do in order to be successful. One of the most important things is to read coding books. Coding books provide a comprehensive overview of different languages and concepts, and they can be an invaluable resource when starting out. Another important thing for programmers to do is never stop learning. Coding is an ever-changing field, and it is important to keep up with new trends and technologies.
Coding is a process of transforming computer instructions into a form a computer can understand. Programs are written in a particular language which provides a structure for the programmer and uses specific instructions to control the sequence of operations that the computer carries out. The programming code is written in and read from a text editor, which in turn is used to produce a software program, application, script, or system.
When you’re starting to learn programming, it’s important to have the right tools and resources at your disposal. Coding can be difficult, but with the proper guidance it can also be rewarding.
This blog is an aggregate of clever questions and answers about Programming, Coding, and Algorithms. This is a safe place for programmers who are interested in optimizing their code, learning to code for the first time, or just want to be surrounded by the coding environment.
2: Spending a lot of time solving an issue yourself before you google it. Just about every issue you stumble upon has, in 99.99% of cases, already been solved by someone else. Learn to properly search for solutions first.
3: Spending a couple of days on a task and realizing it was not worth it. If the time you spend on a single problem is more than half an hour, you are probably doing it wrong; search for alternatives.
4: Writing code from scratch. Do not reinvent the wheel: if you need to write a blog, just search for a demo application in the language and framework you chose, and build your logic on top of it. Need some other feature? Search for another demo incorporating this feature, and use its code.
Congratulations, you have implicitly defined an interface and a function that requires its parameter to fulfil that interface (implicitly).
How do you know any of this? Oh, no problem, just try using the function, and if it fails during runtime with complaints about your bar missing a foo method, you will know what you did wrong. By Paulina Jonušaitė
List of Freely available programming books – What is the single most influential book every Programmer should read
What is the best and easy programming language to learn in 2022?
Best != easy and easy != best. Interpreted BASIC is easy, but not great for programming anything more complex than tic-tac-toe. C++, C#, and Java are very widely used, but none of them are what I would call easy.
Is Python an exception? It’s a fine scripting language if performance isn’t too critical. It’s a fine wrapper language for libraries coded in something performant like C++. Python’s basics are pretty easy, but it is not easy to write large or performant programs in Python.
Like most things, there is no shortcut to mastery. You have to accept that if you want to do anything interesting in programming, you’re going to have to master a serious, not-easy programming language. Maybe two or three. Source.
Why do modern compilers even require us to declare data types? Can’t it figure out what we are doing and put that stuff in for us? Like how JavaScript does.
Type declarations mainly aren’t for the compiler — indeed, types can be inferred and/or dynamic so you don’t have to specify them.
They’re there for you. They help make code readable. They’re a form of active, compiler-verified documentation.
For example, look at this method/function/procedure declaration:
locate(tr, s) { … }
What type is tr?
What type is s?
What type, if any, does it return?
Does it always accept and return the same types, or can they change depending on values of tr, s, or system state?
If you’re working on a small project — which most JavaScript projects are — that’s not a problem. You can look at the code and figure it out, or establish some discipline to maintain documentation.
If you’re working on a big project, with dozens of subprojects and developers and hundreds of thousands of lines of code, it’s a big problem. Documentation discipline will get forgotten, missed, inconsistent or ignored, and before long the code will be unreadable and simple changes will take enormous, frustrating effort.
But if the compiler obligates some or all type declarations, then you say this:
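For instance, something like (the exact types here are purely illustrative):
Location locate(Tree tr, String s) { … }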
Now you know immediately what type it returns and the types of the parameters, you know they can’t change (except perhaps to substitutable subtypes); you can’t forget, miss, ignore or be inconsistent with them; and the compiler will guarantee you’ve got the right types.
That makes programming — particularly in big projects — much easier. Source: Dave Voorhis
What is a programming language that you hope never to work in again, and why?
COBOL. Verbose like no other, excess structure, unproductive, obtuse, limited, rigid.
JavaScript. Insane semantics, weak typing, silent failure. Thankfully, one can use transpilers for more rationally designed languages to target it (TypeScript, ReScript, js_of_ocaml, PureScript, Elm.)
ActionScript. Macromedia Flash’s take on ECMA 262 (i.e., ~JavaScript) back in the day. Its static typing was gradual, so the compiler wasn’t big on catching type errors. This one’s thankfully deader than Disco.
BASIC. Mandatory line numbering. Zero standardization. Not even a structured language — you’ve never seen that much spaghetti code.
In the realm of dynamically typed languages, anything that is not in the Lisp family. To me, Lisps are just more elegant and richer-featured than the rest. Alexander Feterman
Why does game programming fit so well with Object Oriented Programming paradigm?
Object-oriented programming is “a programming model that organizes software design around data, or objects, rather than functions and logic.”
Most games are made of “objects” like enemies, weapons, power-ups etc. Most games map very well to this paradigm. All the objects are in charge of maintaining their own state, stats and other data. This makes it incredibly easier for a programmer to develop and extend video games based on this paradigm.
I could go on, but I’d need an easel and charts. Chrish Nash
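A tiny sketch of that mapping (the classes here are hypothetical, just to show each object owning its own state and behaviour):

// Hypothetical game objects, each responsible for maintaining its own state.
abstract class Entity {
    int x, y, health;
    abstract void update();              // each object knows how to advance itself every frame
}

class Enemy extends Entity {
    void update() { x -= 1; }            // drift toward the player
}

class PowerUp extends Entity {
    void update() { /* bob in place, despawn after a while, etc. */ }
}

class Game {
    java.util.List<Entity> world = new java.util.ArrayList<>();
    void tick() { world.forEach(Entity::update); }   // the game loop treats every object uniformly
}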
What are the concepts every Java programmer must know?
Ok… I think this is one of the most important questions to answer. Based on my personal experience as a programmer, I would say you must learn the following 5 universal core concepts of programming to become a successful Java programmer.
(1) Mastering the fundamentals of the Java programming language – This is the most important skill you must learn to become a successful Java programmer. You must master the fundamentals of the language, especially areas like OOP, Collections, Generics, Concurrency, I/O, Strings, Exception handling, Inner Classes and JVM architecture.
(2) Data Structures and Algorithms – Programming languages are basically just a tool to solve problems. Problems generally have data to process in order to make decisions, and we have to build a procedure to solve that specific problem domain. In real life, the complexity of the problem domain and of the data we have to handle can be very large. That’s why it is essential to know basic data structures like Arrays, Linked Lists, Stacks, Queues, Trees, Heaps, Dictionaries, Hash Tables and Graphs, as well as basic algorithms like Searching, Sorting, Hashing, Graph algorithms, Greedy algorithms and Dynamic Programming.
(3) Design Patterns – Design patterns are general, reusable solutions to commonly occurring problems within a given context in software design, and they are absolutely crucial for a hard-core Java programmer. If you don’t use design patterns you will write much more code, it will be buggy and hard to understand and refactor, not to mention untestable, and patterns are a really great way of communicating your intent quickly to other programmers.
(4) Programming Best Practices – Programming is not only about learning and writing code. Code readability is a universal subject in the world of computer programming. It helps standardize products and reduce future maintenance costs. Best practices help you, as a programmer, to think differently and improve your problem-solving attitude. A simple program can be written in many ways if given to multiple developers. Thus the need for best practices comes into the picture, and every programmer must be aware of them.
(5) Testing and Debugging (T&D) – Besides writing the code for a specific problem domain, you have to learn how to test that code and debug it when needed. Some programmers skip unit testing or other testing methodologies and leave it to the QA folks. That leads to delivering code to the QA team with 80% of its bugs still hiding in it, which reduces productivity and pushes your project toward failure. When a misbehavior or bug occurs in your code during the testing phase, it is essential to know debugging techniques to identify the bug and its root cause.
I hope these instructions will help you become a successful Java programmer. Here I have explained only the universal core concepts that you must learn as a successful programmer; I am not mentioning any technologies that a Java programmer must know, such as Spring, Hibernate, microservices and build tools, because those can change according to the problem domain or environment you are currently working in… Happy coding!
Why is it recommended to learn algorithms as a software developer if most developers say that knowing algorithms doesn’t help much?
Hard to be balanced on this one.
They are useful to know. If ever you need to use, or make a derivative of algorithm X, then you’ll be glad you took the time.
If you learn them, you’ll learn general techniques: sorting, trees, iteration, transformation, recursion. All good stuff.
You’ll get a feeling for the kinds of code you cannot write if you need certain speeds or memory use, given a certain data set.
You’ll pass certain kinds of interview test.
You’ll also possibly never use them. Or use them very infrequently.
If you mention that on here, some will say you are a lesser developer. They will insist that the line between good and not good developers is algorithm knowledge.
That’s a shame, really.
In commercial work, you never start a day thinking ‘I will use algorithm X today’.
The work demands the solution. Not the other way around.
This is yet another proof that a lot of technical-sounding stuff is actually all about people. Their investment in something. Need for validation. Preference.
The more you know in development, the better. But I would not prioritize algorithms right at the top, based on my experience. Alan Mellor
What are the disadvantages of using C++ to make a programming language rather than C, and are there any at all?
So you’re inventing a new programming language and considering whether to write either a compiler or an interpreter for your new language in C or C++?
The only significant disadvantage of C++ is that in the hands of bad programmers, they can create significantly more chaos in C++ than they can in C.
But for experienced C++ programmers, the language is immensely more powerful than C and writing clear, understandable code in C++ can be a LOT easier.
INCIDENTALLY:
If you’re going to actually do this – then I strongly recommend looking at a pair of tools called “flex” and “bison” (which are OpenSourced versions of the more ancient “lex” and “yacc”). These tools are “compiler-compilers” that are given a high level description of the syntax of your language – and automatically generate C code (which you can access from C++ without problems) to do the painful part of generating a lexical analyzer and a syntax parser. Steve Baker
How do you make something private but accessible within a class in C++?
Did you know you can google this answer yourself? Search for “c++ private keyword” and follow the link to access specifiers, which goes into great detail and has lots of examples. In case Google is down, here’s a brief explanation of access specifiers:
The private access specifier in a class or struct definition applies to the declarations that occur after the specifier. A private declaration is visible only inside the class/struct, not in derived classes or structs, and not from outside.
The protected access specifier makes declarations visible in the current class/struct and also in derived classes and structs, but not visible from outside. protected is not used very often and some wise people consider it a code smell.
The public access specifier makes declarations visible everywhere.
You can also use an access specifier on a base class when deriving, to control access to everything inherited from that base class. By Kurt Guntheroth
What are the shortcomings of the Rust Programming language?
Rust programmers do mention the obvious shortcomings of the language.
Such as that a lot of data structures can’t be written without unsafe due to pointer complications.
Or that they haven’t agreed what it means to call unsafe code (although this is somewhat of a solved problem, just like calling into assembler from C0 in the sysbook).
The main problem of the language is that it doesn’t absolve the programmers from doing good engineering.
It just catches a lot of the human errors that can happen despite such engineering. Jonas Oberhauser.
Will Rust beat C++ in performance and the speed of execution?
Comparing cross-language performance of real applications is tricky. We usually don’t have the resources for writing said applications twice. We usually don’t have the same expertise in multiple languages. Etc. So, instead, we resort to smaller benchmarks. Occasionally, we’re able to rewrite a smallish critical component in the other language to compare real-world performance, and that gives a pretty good insight. Compiler writers often also have good insights into the optimization challenges for the language they work on.
My best guess is that C++ will continue to have a small edge in optimizability over Rust in the long term. That’s because Rust aims at a level of memory safety that constrains some of its optimizations, whereas C++ is not bound to such considerations. So I expect that very carefully written C++ might be slightly faster than equivalent very carefully written Rust.
However, that’s perhaps not a useful observation. Tiny differences in performance often don’t matter: The overall programming model is of greater importance. Since both languages are pretty close in terms of achievable performance, it’s going to be interesting watching which is preferable for real-life engineering purposes: The safe-but-tightly-constrained model of Rust or the more-risky-but-flexible model of C++. By David VandeVoorde
Why do a lot of programmers shy away from learning lisp?
Lisp does not expose the underlying architecture of the processor, so it can’t replace my use of C and assembly.
Lisp does not have significant statistical or visualization capabilities, so it can’t replace my use of R.
Lisp was not built with unix filesystems in mind, so it’s not a great choice to replace my use of bash.
Lisp has nothing at all to do with mathematical typesetting, so it won’t be replacing LaTeX anytime soon.
And since I use vim, I don’t even have the excuse of learning lisp so as to modify emacs while it’s running.
In fewer words: for the tasks I get paid to do, lisp doesn’t perform better than the languages I currently use. By Barry RoundTree
What are some things that only someone who has been programming 20-50 years would know?
The truth of the matter gained through the multiple decades of (my) practice (at various companies) is ugly, not convenient and is not what you want to hear.
The technical job interview is a non-indicative and non-predictive waste of time – to put it bluntly, garbage (a Navy Seal can be as brave as (s)he wants to be during the training, but only when the said Seal meets the bad guys face to face on the front line is her/his true mettle revealed).
An average project in an average company, both averaged the globe over, is staffed with mostly random, technically inadequate, people who should not be doing what they are doing.
Such random people have no proper training in mathematics and computer science.
As a result, all the code generated by these folks out there is a flimsy, low-quality, hugely inefficient, non-scalable, non-maintainable, hardly readable steaming pile of spaghetti mess – the absence of structure, order, discipline and understanding in one’s mind is reflected at the keyboard 100 percent of the time.
It is a major hail mary, a hallelujah and a standing ovation to the genius of Alan Turing for being able to create a (Turing) Machine that, on the one hand, can take this infinite abuse and, on the other hand, being nothing short of a miracle, still produce binaries that just work. Or so they say.
There is one and only one definition of a computer programmer: that of a person who combines all of the following skills and abilities:
the ability to write a few lines of properly functioning (C) code in the matter of minutes
the ability to write a few hundred lines of properly functioning (C) code in the matter of a small number of hours
the ability to write a few thousand lines of properly functioning (C) code in the matter of a small number of weeks
the ability to write a small number of tens of thousands of lines of properly functioning (C) code in the matter of several months
the ability to write several hundred thousand lines of properly functioning (C) code in the matter of a small number of years
the ability to translate a given set of requirements into source code that is partitioned into a (large) collection of (small and sharp) libraries and executables that work well together and that can withstand a steady-state non stop usage for at least 50 years
It is this ability to sustain the above multi-year effort, during which the intellectual cohesion of the output remains consistent and invariant, that separates the random amateurs, of which there is a majority, from the professionals, of which there is a minority in the industry.
There is one and only one definition of the above properly functioning code: that of a code that has a check mark in each and every cell of the following matrix:
the code is algorithmically correct
the code is easy to read, comprehend, follow and predict
the code is easy to debug
the intellectual effort to debug code, symbolized as E(d), is strictly larger than the intellectual effort to write code, symbolized as E(w). That is: E(d) > E(w). Thus, it is entirely possible to write a unit of code that even you, the author, can not debug
the code is easy to test
in different environments
the code is efficient
meaning that it scales well performance-wise when the size of the input grows without bound in both configuration and data
the code is easy to maintain
the addition of new and the removal or the modification of the existing features should not take five metric tons of blood, three years and a small army of people to implement and regression test
the certainty of and the confidence in the proper behavior of the system thus modified should be high
(read more about the technical aspects of code modification in the small body of my work titled “Practical Design Patterns in C” featured in my profile)
(my claim: writing proper code in general is an optimization exercise from the theory of graphs)
the code is easy to upgrade in production
lifting the Empire State Building in its entirety 10 feet in the thin blue air and sliding a bunch of two-by-fours underneath it temporarily, all the while keeping all of its electrical wires and the gas pipes intact, allowing the dwellers to go in and out of the building and operating its elevators, should all be possible
changing the engine and the tires on an 18-wheeler truck hauling down a highway at 80 miles per hour should be possible
A project staffed with nothing but technically capable people can still fail – the team cohesion and the psychological compatibility of team members is king. This is raw and unbridled physics – a team, or a whole, is more than the sum of its members, or parts.
All software project deadlines without exception are random and meaningless guesses that have no connection to reality.
Intelligence does not scale – a million fools chained to a million keyboards will never amount to one proverbial Einstein. Source
Is there a way to initialize an object without a constructor? Can you still create objects?
At a technical syntax level, this depends on the language. Many modern languages either create a default constructor, or will automatically initialize object fields to default values. There are other ways to initialize fields in some languages – maybe reflection, maybe a static method, maybe relaxed access control. Maybe (ugh, I feel sick) a whole bunch of setters.
But at the human level, why? Why engage in something unclear to the next programmer?
One nice thing about a constructor is that it tells me that you thought about how your object should be created. You considered what was needed to make it safe to use. By Alan Mellor
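A small Java sketch of those points (the class and fields are made up for illustration): write no constructor and Java supplies a default one, with fields falling back to default values; a static factory, one of the alternatives mentioned, at least still documents how the object is meant to be built.

// With no constructor written, Java supplies a default one and fields get default values.
class Settings {
    int retries;     // defaults to 0
    String host;     // defaults to null – is that actually safe for callers to use?

    // A static factory is one alternative way to initialize; like a constructor,
    // it at least tells the reader how the object is meant to be created.
    static Settings of(String host, int retries) {
        Settings s = new Settings();
        s.host = host;
        s.retries = retries;
        return s;
    }
}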
Is it bad if I write a function that only gets called once?
A function pulls a computation out of your program and puts it in a conceptual box labeled by the function’s name. This lets you use the function name in a computation instead of writing out the computation done by the function.
Writing a function is like defining an obscure word before you use it in prose. It puts the definition in one place and marks it out saying, “This is the definition of xxx”, and then you can use the one word in the text instead of writing out the definition.
Even if you only use a word once in prose, it’s a good idea to write out the definition if you think that makes the prose clearer.
Even if you only use a function once, it’s a good idea to write out the function definition if you think it will make the code clearer to use a function name instead of a big block of code. Source.
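A tiny illustration of the idea, with a made-up loan formula: even a computation used exactly once can read better behind a name.

class LoanReport {
    // Called only once, but the name documents what the block of math means.
    static double monthlyPayment(double principal, double monthlyRate, int months) {
        return principal * monthlyRate / (1 - Math.pow(1 + monthlyRate, -months));
    }

    public static void main(String[] args) {
        System.out.println(monthlyPayment(10_000, 0.005, 60));   // vs. pasting the raw formula inline here
    }
}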
Can conditional statements be effectively removed by the use of polymorphism when using object-oriented programming?
Conditional statements of the form if this instance is type T then do X can generally — and usually should — be removed by appropriate use of polymorphism.
All conditional statements might conceivably be replaced in that fashion, but the added complexity would almost certainly negate its value. It’s best reserved for where the relevant types already exist.
Creating new types solely to avoid conditionals sometimes makes sense (e.g. maybe create distinct nullable vs not-nullable types to avoid if-null/if-not-null checks) but usually doesn’t. Source.
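A minimal before/after sketch (with hypothetical shape types) of what that replacement looks like:

// Before: a conditional on the runtime type.
// if (shape instanceof Circle c) { area = Math.PI * c.r * c.r; }
// else if (shape instanceof Square sq) { area = sq.side * sq.side; }

// After: each type answers for itself, and the conditional disappears.
interface Shape { double area(); }

class Circle implements Shape {
    double r;
    Circle(double r) { this.r = r; }
    public double area() { return Math.PI * r * r; }
}

class Square implements Shape {
    double side;
    Square(double side) { this.side = side; }
    public double area() { return side * side; }
}

// Callers simply write shape.area() and never check the concrete type.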
Can you explain exception handling in Java so clearly that I’ll never get it wrong ever again?
Something bad happens as your Java code runs.
Throw an exception.
The following lines after the throw do not run, saving them from the bad thing.
Control is handed back up the call stack until the Java runtime finds a catch() statement that matches the exception.
The code resumes running from there. Source: Allan Mellor
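Those steps in a minimal runnable sketch (the exception and method names are made up):

class PaymentFailedException extends Exception {
    PaymentFailedException(String msg) { super(msg); }
}

public class Checkout {
    static void charge(double amount) throws PaymentFailedException {
        if (amount <= 0) {
            throw new PaymentFailedException("bad amount");   // something bad happened: throw
        }
        System.out.println("charged " + amount);              // never runs for the bad case
    }

    public static void main(String[] args) {
        try {
            charge(-5);                                       // control jumps up the call stack...
        } catch (PaymentFailedException e) {                  // ...to the first matching catch
            System.out.println("recovering: " + e.getMessage());
        }
        System.out.println("execution resumes here");         // and the code carries on from there
    }
}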
Why is the YouTube algorithm so much better at finding similar music compared to Spotify and other music providers?
Google has better programmers, and they’ve been working on the problem space longer than either Spotify or the other providers have existed.
YouTube has a year and a half on Spotify, for example, and they’ve been employing a lot of “organ bank” engineers from Google proper, for various problems — like the “similar to this one“ problem — and the engineers doing the work are working on much larger teams, overall.
Spotify is resource starved, because they really aren’t raking in the same ratio of money that YouTube does. By Terry Lambert
Is coding Java in Notepad++ and compiling with command prompt good for learning Java?
Over the past two decades, Java has moved from a fairly simple ecosystem, with the relatively straightforward ANT build tool, to a sophisticated ecosystem with Maven or gradle basically required. As a result, this kind of approach doesn’t really work well anymore. I highly recommend that you download the community edition of IntelliJ IDEA; this is a free version of a great commercial IDE. By Joshua Gross
How do you handle a JSON response in Java?
Best bet is to turn it into a record type as a pure data structure. Then you can start to work on that data. You might do that directly, or use it to construct some OOP objects with application-specific behaviours on them. Up to you.
You can decide how far to take layering as well. Small apps work ok with the data struct in the exact same format as the JSON data passed around. But you might want to isolate that and use a mapping to some central domain model. Then if the JSON schema changes, your domain model won’t.
Libraries such as Jackson and Gson can handle the conversion. Many frameworks have something like it built in, so you get delivered a pure data struct ‘object’ containing all the data that was in the JSON.
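For instance, a minimal sketch with Jackson (the record name and JSON shape here are assumptions, and record support needs a reasonably recent jackson-databind):

import com.fasterxml.jackson.databind.ObjectMapper;

public class JsonExample {
    // Pure data structure matching the (assumed) JSON: {"id": 42, "name": "Ada"}
    record UserDto(long id, String name) {}

    public static void main(String[] args) throws Exception {
        String json = "{\"id\": 42, \"name\": \"Ada\"}";
        ObjectMapper mapper = new ObjectMapper();
        UserDto user = mapper.readValue(json, UserDto.class);  // JSON -> record
        System.out.println(user.name());
    }
}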
Things like JSON validators and JSON Schema can help you validate the response JSON if need be. By Alan Mellor
What is the tech stack behind Slack?
Keith Adams already gave an excellent overview of Slack’s technology stack so I will do my best to add to his answer.
Products that make up Slack’s tech stack include: Amazon (CloudFront, CloudSearch, EMR, Route 53, Web Services), Android Studio, Apache (HTTP Server, Kafka, Solr, Spark, Web Server), Babel, Brandfolder, Bugsnag, Burp Suite, Casper Suite, Chef, DigiCert, Electron, Fastly, Git, HackerOne, JavaScript, Jenkins, MySQL, Node.js, Objective-C, OneLogin, PagerDuty, PHP, Redis, Smarty, Socket, Xcode, and Zeplin.
Additionally, here’s a list of other software products that Slack is using internally:
Marketing: AdRoll, Convertro, MailChimp, SendGrid
Sales and Support: Cnflx, Front, Typeform, Zendesk
Analytics: Google Analytics, Mixpanel, Optimizely, Presto
Slack is used by 55% of Unicorns (and 59% of B2B Unicorns)
Slack has 85% market share in Siftery’s Instant Messaging category on Siftery
Slack is used by 42% of both Y Combinator and 500 Startups companies
35% of companies in the Sharing Economy use Slack
(Disclaimer: The above data was pulled from Siftery and has been verified by individuals working at Slack) By Gerry Giacoman Colyer
When should programmers use recursion?
Programmers should use recursion when it is the cleanest way to define a process. Then, WHEN AND IF IT MATTERS, they should refine the recursion and transform it into a tail recursion or a loop. When it doesn’t matter, leave it alone. Jamie Lawson
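A small illustration of that refinement (factorial used as a stand-in example):

public class Factorial {
    // The cleanest way to define the process: straight recursion.
    static long factorial(long n) {
        return n <= 1 ? 1 : n * factorial(n - 1);
    }

    // The same computation refined into a loop, when and if it matters.
    static long factorialLoop(long n) {
        long result = 1;
        for (long i = 2; i <= n; i++) result *= i;
        return result;
    }
}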
Why is multithreading so underused?
Mostly because:
Multithreading is not applicable for most problems (see reason #3).
For a substantial subset of the problems that multithreading is applicable for, the rewards for using it are not significant enough to be worth the extra development effort.
For a subset of the remaining use cases, using multithreading requires rethinking how you solve the problem, in order to break it up into separate chunks that can be processed by different threads and the results then recombined.
Besides extra development effort in this sense, this also adds extra overhead to the solution, overhead which may outweigh the benefits of using multithreading.
Add to all of the above, multithreading gives the programmer a lot of rope with which they can easily hang themselves, so they tend to approach it with caution. Or they don’t, and end up hanging themselves.
Finally there is a small but important set of problems — including, for example, machine learning and big data — for which multithreading could be useful but is probably superseded by multiprocessing and cloud architectures.
This requires the same sort of redesign work that I mentioned in #3 above, but it happens at a higher logical and system level than multithreading. Instead of multiple threads, running inside the same process and talking to each other, you end up with multiple processes, quite likely running on different server instances (docker usually, sometimes virtual servers), possibly on different server hardware, talking to each other via network.
Multithreading is generally useful for two sorts of problems:
Problems that are easily chunked up and farmed out to multiple threads or processes, then the results returned and combined. Also called “highly parallelizable”.
Of course a lot of 3D rendering is highly parallelizable… and almost all computers have specialized GPU hardware for doing that much faster than any CPU can.
Problems of which some part is strongly I/O bound, the most common example of which is user interfaces, which spend most of their time waiting on human reaction speeds.
And in fact, multithreading is used a lot in user interfaces, and web servers, which have to contend with the same issue. By Stevens J. Owens
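As a rough sketch of the "chunk it up, farm it out, recombine" pattern described above (the data, chunk count, and thread count here are arbitrary):

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ChunkedSum {
    public static void main(String[] args) throws Exception {
        long[] data = new long[1_000_000];
        Arrays.setAll(data, i -> i);                       // 0, 1, 2, ...

        int chunks = 4;
        int chunkSize = data.length / chunks;
        ExecutorService pool = Executors.newFixedThreadPool(chunks);
        List<Future<Long>> partials = new ArrayList<>();

        for (int c = 0; c < chunks; c++) {
            final int start = c * chunkSize;
            final int end = (c == chunks - 1) ? data.length : start + chunkSize;
            partials.add(pool.submit(() -> {
                long sum = 0;
                for (int i = start; i < end; i++) sum += data[i];
                return sum;                                // each thread returns a partial result
            }));
        }

        long total = 0;
        for (Future<Long> f : partials) total += f.get();  // recombine
        pool.shutdown();
        System.out.println(total);
    }
}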
When is it (if ever) a good idea to use JavaScript instead of TypeScript?
TypeScript is helpful when you have a large codebase which is going to be updated many times by many collaborators. When you are not in that use case, the advantages of TypeScript are much less obvious. Besides, it is possible to be too orthodox with TypeScript and prevent behaviors which are actually acceptable. It’s also possible (easy, even) to feel that your TypeScript implementation prevents behaviors which it actually allows. So let’s all agree that TypeScript is no silver bullet.
So TypeScript doesn’t always make things better, and sometimes it makes them worse. There are situations where transpiling TS to JS is just not an option. Also, transpiling with types will always make the resulting JS file larger, and sometimes you have to specifically optimize for the smallest possible code file. Jerome Cukier
Is Node.js better than Golang in the perspective of development speed (e.g., you write less code)?
If you use JavaScript source code with Node then yes! You will probably write shorter lines.
Go has pesky things like type information in it. It has interfaces and error returns, all needless clutter that just gets in the way of that programmer brain dump. You even have to type := instead of = to assign variables!
It’s almost like the makers of Go just wanted you to type more stuff in for the same program. Maybe they had reasons, eh, who knows? They’d probably say there were benefits to doing so.
But yes your keystrokes will be fewer with JavaScript. By Alan Mellor
Why is the C programming language not used for smartphones and other hardware devices instead of Java?
Your phone runs a version of Linux, which is programmed in C. Only the top layer is programmed in java, because performance usually isn’t very important in that layer.
Your web browser is programmed in C++ or Rust. There is no java anywhere. Java wasn’t secure enough for browser code (but somehow C++ was? Go figure.)
Your Windows PC is programmed mostly in C++. Windows is very old code, that is partially C. There was an attempt to recode the top layer in C#, but performance was not good enough, and it all had to be recoded in C++. Linux PCs are coded in C.
Your intuition that most things are programmed in java is mistaken. Kurt Guntheroth
How do you declare an array globally in Java?
That’s not possible in Java, or at least the language steers you away from attempting that.
Global variables have significant disadvantages in terms of maintainability, so the language itself has no way of making something truly global.
The nearest approach would be to abuse some language features like so:
public class Globals {
    public static int[] stuff = new int[10];
}
Then you can use this anywhere with
Globals.stuff[0] = 42;
Java isn’t Python, C nor JavaScript. It’s reasonably opinionated about using Object Oriented Programming, which the above snippets are not examples of.
This also uses a raw array, which is a fixed size in Java. Again, not very useful, we prefer ArrayList for most purposes, which can grow.
I’d recommend the above approach only if you have no alternatives, are not really trying to learn Java and just need a quick and dirty utility hack, or are just starting out in programming and finding your feet. Alan Mellor
In which situations is NoSQL better than relational databases such as SQL? What are specific examples of apps where switching to NoSQL yielded considerable advantages?
Warning: The below answer is a bit oversimplified, for pedagogical purposes. Picking a storage solution for your application is a very complex issue, and every case will be different – this is only meant to give an overview of the main reason why people go NoSQL.
There are several possible reasons that companies go NoSQL, but the most common scenario is probably when one database server is no longer enough to handle your load. NoSQL solutions are much better suited to distributing load over shitloads of database servers.
This is because relational databases traditionally deal with load balancing by replication. That means that you have multiple slave databases that watch a master database for changes and replicate them to themselves. Reads are made from the slaves, and writes are made to the master. This works up to a certain level, but it has the annoying side effect that the slaves will always lag slightly behind, so there is a delay between the time of writing and the time the object is available for reading, which is complex and error-prone to handle in your application. Also, the single master eventually becomes a bottleneck no matter how powerful it is. Plus, it’s a single point of failure.
NoSQL generally deals with this problem by sharding. Overly simplified, it means that users with userid 1-1000000 are on server A, users with userid 1000001-2000000 are on server B, and so on. This solves the problems that relational replication has, but the drawback is that features such as aggregate queries (SUM, AVG, etc.) and traditional transactions are sacrificed.
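A toy illustration of range-based sharding (the server names and users-per-shard figure are made up):

public class ShardRouter {
    private static final String[] SHARDS = {"db-a.example.com", "db-b.example.com", "db-c.example.com"};
    private static final long USERS_PER_SHARD = 1_000_000;

    // Users 1..1,000,000 go to shard 0, 1,000,001..2,000,000 to shard 1, and so on.
    static String shardFor(long userId) {
        int index = (int) ((userId - 1) / USERS_PER_SHARD);
        return SHARDS[Math.min(index, SHARDS.length - 1)];
    }

    public static void main(String[] args) {
        System.out.println(shardFor(42));          // db-a.example.com
        System.out.println(shardFor(1_500_000));   // db-b.example.com
    }
}

Any query that needs to touch every shard (aggregates, cross-user joins) now has to fan out and merge results itself, which is exactly the trade-off described above.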
Chrome is coded in C++, assembler and Python. How could three different languages be used to obtain only one product? What is the method used to merge programming languages to create software?
Concretely, a processor can only correctly execute one kind of instruction: machine code, which assembly language represents. The exact instruction set also depends on the type of processor.
Because assembly requires several operations just to perform a simple addition, compilers were created which, starting from a higher-level language (easier to write), can automatically generate the assembly code.
These compilers can sometimes accept several languages. For example, the GCC compiler can compile both C and C++, and it also accepts embedded pieces of assembly, introduced by the keyword __asm__. Assembly is still something to avoid wherever possible, because it is completely machine-dependent and can therefore be a source of interference and unpleasant surprises.
More generally, we also often create multi-language applications using several components (libraries, DLLs, ActiveX, etc.). The interfaces between these components are managed by the operating system and allow Java, C, C++, C#, Python, and anything else you could wish for to coexist happily. A certain finesse is however necessary in the transitions between languages, because each one has its implicit rules, which must therefore be enforced very explicitly.
For example, an object coming from the C++ world and transferred through these interfaces into a Java program will have to be explicitly destroyed; the Java garbage collector only manages its own objects.
Another practical interface is web services: each module, whatever its technology, can communicate with the others by exchanging serialized objects in JSON… which is much less error-prone! Source: Vincent Steyer
What is the most dangerous code you have ever seen?
The first snippet (not reproduced here) is the one-liner that removes the entire filesystem, starting from the root /.
The second (also not reproduced) is a “Russian roulette” variant: roughly a one-in-six chance of running the first command, otherwise it just prints “click”.
How difficult is LeetCode, How is it used in a practical way?
Practically, it is used for two purposes:
Practicing coding-in-the-small, like a daily crossword puzzle for programmers
Pre-screens for certain interview processes
Certain interview processes ask LeetCode style questions as a technical test. Not all do. Possibly not even most. Source
Which type of software developer should learn first, C, Python, or JavaScript?
If you plan to be a professional general software engineer:
C, then Python, then JavaScript.
If you plan to be a professional Web developer:
JavaScript, then Python, then C.
If you want to learn application programming as a hobby:
Python, then JavaScript, then C.
If you want to learn embedded systems programming as a hobby:
C, then Python. Skip JavaScript.
In general, learning C first will give you a great grounding in computing and computational machinery, whilst giving you useful programming skills. It’s not the easiest journey, but if you know C well, everything else becomes easier. Source.
Are HTML and CSS still relevant in 2022?
Relevant?
They’re unavoidable if you’re a Web frontend developer and not using a frontend framework that autogenerates HTML and CSS.
If you’re a backend developer or working entirely outside of Web development (there’s actually a lot of that) then HTML and CSS are, for you, completely irrelevant. Dave Voorhis
What is a disadvantage of JavaScript?
Richard Kenneth Eng covered most of the major issues with JavaScript itself, so I won’t repeat. Instead of focusing on the weirdness inherent to the language, I want to focus on JavaScript in the ideal, and what disadvantages may lie therein. When I say in the ideal, what I mean is what disadvantages exist if we assume perfect application of the language without concern for the quirks, because even there, problems exist.
For me, the single biggest disadvantage to JavaScript is that best practices can change rapidly and without notice. This is because all JavaScript runs in an engine, be it V8 in Chrome and Node, SpiderMonkey in Firefox, Chakra in Edge, or JavaScriptCore in Safari.
Since competition among browsers is so fierce, JavaScript performance is of the utmost importance. That means that tests and performance profiles for code that were done six months ago could be obsolete. The major companies try to alleviate this confusion somewhat with docs providing insight into the engine (Chrome[1][2], Firefox[3][4], Edge, Safari[5]) and future direction of development, but there are no guarantees. Your ideal machine could suddenly, and by no action of your own, no longer be ideal.
For example, not that long ago, using an array.join() to build large strings was best practice. Today, brute-force concatenation is wildly faster. Or for a more conceptual example, tail call optimization. This is a major part of functional programming. It is part of the ES6 spec. Chrome had it available, but it has since been pulled from Firefox, Chrome, Node, and Edge. Only Safari supports it.
Contrast this with the relatively stable internal implementation of things in Python. Yes, Python can be woefully slow for some operations, but how Python will run is much better known than JavaScript.
I see this as the key problem for JavaScript, even in an idealized form. Source: Aaron Martin Colby
What should beginner programmers know about software testing?
It exists.
It takes time.
It requires culture and discipline.
Unit testing is what takes the least time.
Hours writing an automated test is time invested, not time wasted.
Once into it, you would not believe that a while ago you were not taking testing seriously enough.
Testing allows the programmer (either the one who wrote the code initially or a new programmer) to refactor the code without as much fear of breaking something.
R is an environment for developing and implementing statistics and data analysis. The newest methods are overwhelmingly written in R.
Python is a general purpose programming language. It has lots of stat capabilities, but is, AFAIK, used much, much less for the development of new methods. By Peter Flom
Do some software engineers fall into the trap of copying code from Stack Overflow that solves their problem without understanding how the code actually works?
I think that “copying code” is extremely rare among stronger developers, but it seemingly must be something that happens given the number of memes that reference it.
I’ve also seen people post that “it’s faster to copy the code than to write it.” This frankly shocks me. I can’t even imagine how a search for the exact bit of code that you need could possibly be faster than just writing the code.
I mean, there do exist some pretty hairy algorithms that would be hard to get right in one go. And if you can’t find a library, then in those rare cases starting with working code might make sense.
But I’m talking a once per year kind of exceptional experience, if that. And given that I’ve also seen people claim that all software developers really need to know is how to iterate over lists and concatenate strings, I really doubt any of the really complex algorithms are what people are picking up.
So here’s the deal: If it’s faster for a developer to look up the code than to write it, what are the odds they will actually be able to fully understand it? They didn’t write it, so they certainly don’t understand it as well as if they had written it.
One final note: A few times recently I’ve seen a comparison between copying and pasting code and using libraries. It’s profoundly different from using a library. A library I would choose would:
Be tested for corner cases and not just demonstrate a technique
Be reviewed for security vulnerabilities
Be verified by unit and system tests
Be used in hundreds of projects, ensuring that it works in many situations
Be updated frequently when any problems or security flaws are discovered, which will trigger a warning (and actually send me an email) telling me that the library needs to be patched and why.
I think that one of the major anti-patterns common in the PHP world is to copy and paste code in preference to installing libraries. You end up with millions of copy-pasted security holes that are literally millions of times more difficult to find and fix.
So no, it’s not the same by any stretch. Copying code from StackOverflow is an anti-pattern. Looking at code to see how a library or method is intended to be used is fine. Looking for docs on a language is fine. I’m not against using the internet as a reference. Source.
No. Just Python will not be enough to land a job. You need 5 more things.
1. Companies don’t hire a Python dev. They hire a problem-solver.
If you have learned X and can’t do Y with the concepts you learned from X, you will not get hired. It’s impossible to know what problems you will have to solve when you get hired, or what problem you will be solving two or three years from now. That’s why companies look for people who can take any problem and solve it using coding techniques.
For example, you have learned the dictionary data structure. Now, if I give you a new situation (car dealership, book club, grocery store, or bank software, etc.) and you don’t know how to use the dictionary data structure in that situation, you will not get hired.
So,
Don’t just learn coding. Pay attention to why you are doing certain things, and to what else you could do to solve the problem.
What should you absolutely never do when using Python? Python Do’s and Don’ts
1. Don’t do this:
a = []
for i in range(x):
    if i % 2 == 0:
        a.append(i)
Rather do this:
a = [i for i in range(x) if i % 2 == 0]
2. Don’t do this:
arr = ['This', 'is', 'a', 'sentence']
s = ''
for i in range(len(arr) - 1):
    s = s + arr[i] + ' '
s = s + arr[-1]
# rather do this:
s = ' '.join(arr)  # This is a sentence
3. Don’t do this:
name = 'Tyler'
level = 15
rank = 'Supreme'
print('Hello ' + name + ', you are on level ' + str(level) + ' and your rank is ' + rank + '.')
# rather do this:
print('Hello {}, you are on level {} and your rank is {}.'.format(name, level, rank))
4. Don’t redo something that has already been done; use libraries (if they exist) instead of writing things from scratch.
5. Don’t do this:
if a > 5:
    v = True
else:
    v = False
Do this:
v = a > 5  # sets v to True if a > 5, else False
This one is restricted to booleans. Let’s say you wanted it to be either 'Yes' or 'No' instead of True or False:
v = 'Yes' if a > 5 else 'No'
You can take this a step further and do this:
v = ('No', 'Yes')[a > 5]
Here a > 5 is either False (0) or True (1), so if it’s True, this returns the element at index 1 ('Yes'), and 'No' (index 0) if a > 5 is False.
6. Don’t do something like this:
if a == True:
    b = False
if a == False:
    b = True
# rather do this
b = not a
7. You can use libraries instead of doing the stuff from scratch
YOU DON’T NEED TO REINVENT THE WHEEL
There are a vast number of libraries out there
8. Always use functions instead of copy pasting the code over and over again
Last but not least: don’t feel embarrassed if you can’t understand something. A problem with new programmers is that they hesitate to ask questions. You can ask someone online if you don’t understand something. There are websites like Stack Overflow, Quora, Reddit, and other forums. Always feel free to post your question. This applies to all programming languages, not just Python.
2. Companies don’t hire a single skill. They hire a set of skills.
Just Python is like plain coffee. It doesn’t taste good. You need to add milk, sugar, or caramel to make it tasty. Similarly, don’t just learn Python. You have to learn a little bit about other programming languages too. You don’t have to master them, but you need to know a little bit.
To do web development with Python, you need to know HTML, CSS, and JavaScript. Without a basic understanding of HTML, CSS, and JavaScript, you won’t be able to master Python frameworks like Django, Flask, etc.
You must learn a little bit about databases (SQL): how to structure a table, how to query data from a table, how to join data from two tables.
If you want to become a machine learning developer, you need to know the basics of mathematical modeling, how to train a model, and what the different modeling approaches are.
Also, you could be just the front-end developer or just the database person. However, you need to know how full-stack software development works: how the front end, back end, and database are connected.
3. Don’t just learn Python. Learn the overall Software Development process.
Unfortunately, most companies don’t want to spend time training you in the overall software development process. That’s why you will hear that companies are looking for X years of experience. To compete with that requirement…
So,
Build full-scale projects. Have at least 3 projects on your GitHub.
Don’t just copy the projects from somewhere. Instead, try to build them yourself. While developing a project, you will get stuck numerous times. Try to find solutions online. Struggling to find the solution will make you a better developer.
Deploy your projects on some servers. It could be Heroku or somewhere else.
Get familiar with popular Python libraries and frameworks like NumPy, Pandas, Scrapy, Django, etc. Play with them. Use them in some projects.
Write unit tests. Put enough comments in your code. Know how to organize code. Learn Python best practices such as the PEP 8 style guide.
Master at least one IDE. Learn keyboard shortcuts.
In most programming languages, why do I have to write “x > 30 AND x < 100” and not “30 < x < 100”?
The first expression is much easier to parse than the other one. It’s just three binary operators combined together. The second expression doesn’t work like that. After reading the first bit, you can’t just say “okay, this is a binary comparison operator”, you need to continue reading forward to determine how to proceed.
Not impossible, just extra difficult for very little gain. If you want to cover just this specific case, and not an arbitrary string of chained comparisons, you can achieve it easily with containment and range operators, like x in 30..100.
I personally avoid using bare new whenever possible. Switching to std::make_unique makes it easier to avoid subtle leak situations by guaranteeing every allocation immediately has an owner that will delete it. This is particularly true in environments that allow exceptions.
If you have a legacy codebase that you contribute to, follow its norms. Otherwise, I strongly encourage using std::unique_ptr to track ownership, and avoid bare new.
You can (and should!) use raw pointers and references to pass objects around. Use std::unique_ptr to manage ownership only. Use std::unique_ptr in interfaces that manage ownership transfer.
Use std::shared_ptr if ownership is shared among threads, or in rare cases where you need more complex lifetime management. The same caveats apply: use it to manage ownership, and to highlight ownership transfer in interfaces. Source
Is it better to write clear but slightly inefficient code or abstruse but optimized code?
In a programming class I took, the instructor, Will, gave one assignment as “write the most efficient Scheme code you can to compute the 100th Fibonacci number.” He promised that the person who wrote the most efficient code would get some prize (bonus points? I don’t remember what it actually was).
There of course are many ways to write such a program. The naive implementation usually involves doubly recursive calls, and might look something like this:
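The original snippet isn’t reproduced in this copy of the answer; a rough Java stand-in for the naive doubly recursive version (using BigInteger, since fib(100) overflows the built-in integer types) would be:

import java.math.BigInteger;

public class NaiveFib {
    static BigInteger fib(int n) {
        if (n < 2) return BigInteger.valueOf(n);
        return fib(n - 1).add(fib(n - 2));   // two recursive calls per invocation
    }
}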
This is a pretty clear implementation, and for many purposes (perhaps up to n = 20 or so on a modern computer) it might even be “fast enough.” But if you consider how it works, it basically computes fib(n) by adding up 1, fib(n) times. By the time n == 50, that’s 12,586,269,025, which of course is a lot of ones to add up and takes a fair amount of time (the growth of fib(n) is exponential), since this performs on the order of fib(n) additions.
It’s not hard to come up with an algorithm which exhibits linear behavior (assuming, falsely but good enough for this argument, constant-time additions). The original looked something like this (again in Scheme, using tail recursion and a helper function):
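The Scheme code isn’t reproduced in this copy; a rough Java equivalent of the accumulator-passing helper would be:

import java.math.BigInteger;

public class LinearFib {
    static BigInteger fib(int n) {
        return loop(n, BigInteger.ZERO, BigInteger.ONE);
    }

    // The helper recurses once per decrement of n.
    static BigInteger loop(int n, BigInteger a, BigInteger b) {
        if (n == 0) return a;
        return loop(n - 1, b, a.add(b));
    }
}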
I’d submit that this is actually a bit more tricky than the code above to understand, and takes a modicum of thought to both come up with, but it works pretty well. It isn’t hard to see that the loop procedure is called exactly n times, so it is much better behaved.
But can we do better? As it happens, yes.
This post is already long, but you can read about the method that I chose here: Fast Fibonacci algorithms, under the heading of “matrix exponentiation”. You can compute the nth Fibonacci number in O(log n) operations (operations being 2×2 matrix multiplies). Numbers can be raised to large powers using a divide-and-conquer algorithm: if a number is raised to an even power, we can compute it by squaring the number raised to half that power; if the power is odd, we can subtract one, use the above trick, and then multiply the result once more. This gives you log(n) recursive calls, each of which does (again, assuming constant-speed arithmetic) constant work. Huzzah.
I coded this up. It worked rather well. I could compute fib(n) for pretty large n in fairly modest time. The code was actually fairly pretty and well documented. I did a couple of small tweaks to make it go slightly faster: the code to multiply two 2×2 matrices could be made slightly faster by taking advantage of the symmetries. A few other tweaks helped modestly, but I didn’t adopt any kind of loop unrolling or the like.
This story is getting long. Skip ahead.
After the assignments were graded, he had four of us write our solution on the board. I was selected. I was particularly proud of my solution.
A couple showed minor tweaks of the exponential or linear time codes. He pointed out particular aspects as being noteworthy, but mentioned you could do a lot better.
He then asked me to explain my code. I took a couple of minutes, and explained the fast exponentiation algorithm, and how it computed, and what I expected the time to be. He said “well done, and clearly written.”
He then proceeded to the fourth example. It was easily twice as long as mine, with basically all the matrix multiplies explicitly unrolled into a long, confusing pile of code, with iterative calls that shuffled eight or so variables around. It wasn’t at all clear from the code what was going on, and while I had some clue, I would have hated to debug it.
Will awarded “best program” to him.
I was unconvinced. Mine was clearly easier to read, and I suspected that all this “optimization” didn’t do squat in exchange for making the code impossible to read and maintain. I said so in class. Will patiently explained that he wasn’t asking for the clearest code, but rather the fastest code. So I naturally asked, “Well, how much faster is it than mine?”
It was then apparent that he had never timed the programs. He had verified that the code gave the right answer, but he had never actually timed them. I suggested that we return to his office and do so.
To his credit, he did so. And we discovered that not only was my code clearer, it was significantly faster. As in twice as fast.
While trying to understand why that must be so, we uncovered what the issue was. In the midst of all the unrolling that he did, his code did one extra function evaluation. In essence, he computed fib(n+1) before just returning fib(n). And as it happens, that one extra evaluation cost a significant fraction of the total time to compute fib(n), because bignum arithmetic is not constant time. My code didn’t do that operation, so it was a lot faster.
In other words, in an attempt to make optimized code, he had inadvertently inserted code which wasn’t a bug exactly (the code returned the right answer) but which didn’t perform as well as code that was clearly written.
I claimed a moral victory for myself, although my recollection is that Will didn’t agree with me, and said “well, when the bug is fixed, his is faster,” which was true, but again, I would submit, irrelevant.
End of the parable. I learned a couple of important lessons.
My belief in writing readable code first was justified. The choice of the proper algorithm gave me virtually all of the speed savings I needed. Additional tweaking that reduced readability to get statistically insignificant gains was not justifiable.
If you aren’t timing, you aren’t optimizing. Will had a preconception about the code performance, but it didn’t match what we measured when the code was actually run. If you aren’t profiling, you are wasting your time “optimizing.” You can only optimize what can be measured, and you have to do the measurements to do optimization.
Ultimately, programs are written as much to be read by programmers as run by machines. Clarity and correctness are almost always the primary consideration, and choosing the right algorithm and approach is often far more important for performance than shuffling the deck chairs in exactly the right way. Source.
Let’s say you’re back in time and want to learn data structures and algorithms, how will you start, and why?
To allocate an array of size n in C without initialization, does it take O(1) or O(n) time?
A precise answer depends on the implementation of malloc() that your compiler/operating system uses, but to first approximation, the answer is NEITHER.
On the one hand, for most dynamic memory-management algorithms, the time required to allocate an uninitialized block of memory is independent of the size of the allocated block. On the other hand, the time required to allocate a block usually does depend on the pattern of previous allocations and deallocations.
For example:
If your memory system maintains a simple linked list of freed blocks, a single call to malloc() could require Θ(F) time in the worst case, where F is the number of earlier calls to free(). Let me emphasize that my variable F has no relationship whatsoever to your variable n.
If your system uses buddy memory allocation, a single call to malloc() requires Θ(log(M/n)) time in the worst case, where M is the total size of allocatable memory. The only relationship between M and the block size n is the trivial inequality n ≤ M. In particular, larger blocks are allocated more quickly in the worst case than smaller blocks!
Many common memory-management schemes, like Doug Lea’s dlmalloc(), have been observed to run in a small number of instructions on average, in practice. There are also more specialized memory-management schemes like TLSF that provably support malloc() in O(1) worst-case time. In light of these algorithms, it is reasonable to assume, for purposes of crude theoretical analysis, that each call to malloc and free requires O(1) amortized time.
That crude theoretical model usually works well for programs that are CPU-bound, or that are memory-bound but primarily use static or fixed-size allocation. But if dynamic memory management is actually a significant contributor to your code’s running time, you probably need to take off the big-Oh glasses and measure the performance experimentally.
Why do they say, “ints are not integers and floats are not real”?
They say it because it’s true!
For this answer, I will assume types similar to int and float in C or C++. What I describe, though, is true for corresponding data types in many other programming languages, possibly after tweaking some details. It also applies to other integer and floating point types once you adjust the numeric ranges.
Without further ado…
Quick: how many digits of precision does an element of ℝ have? What’s the largest element of ℤ? Those questions don’t quite make sense, do they? But they seem a lot more sensible for int and float, don’t they?
To be precise, when someone says “int are not integers and float are not real,” they’re saying: int ⊊ ℤ (int is a proper subset of the integers), and float ⊄ ℝ (float isn’t even a subset of the reals, once you count NaN and the infinities).
And as I mention in the Addendum, we can tighten the float side further: float ⊊ ℤ[1/2] ∪ {+0, −0, +Inf, −Inf, NaN(a)}.
Digging in…
Integers vs. int
For int ≢ ℤ: The int data type in C and C++ has a limited range. Suppose you have a 32-bit two’s complement int. It can hold integers in the range [−2³¹, 2³¹−1]. You can make similar statements for int types with different sizes and representations.
Thus, int are a proper subset of integers. Every int value is an integer, but not every integer fits in an int.
Real vs. float
For float ≢ ℝ, consider that 0.1 + 0.2 = 0.3 in ℝ, but the same isn’t true with float.
Try it! Then hop on over to this answer for why:
Why is 0.1+0.2 not equal to 0.3 in most programming languages?
Computers implement a wide range of arithmetic schemes. In some, such as decimal floating point and rational arithmetic, 0.1 + 0.2 does equal 0.3. One computer I own uses radix-100 floating point, and for it, 0.1 + 0.2 = 0.3 as well. Now, in binary floating point arithmetic, including the ubiquitous version defined by IEEE-754 floating point standard, it is true: 0.1 + 0.2 ≠ 0.3. I explain the math in the answer below, working the example in double precision. It’s similar for single precision. Most programs these days use IEEE floating point by default. Programs can choose to implement other forms of arithmetic, including rational arithmetic and decimal floating point. I’ve written a few other answers that discuss how binary floating point arithmetic works, in case you want to read up on it.
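To see it concretely, here is a quick check (in Java, whose double is IEEE-754 binary64; the default floating point types of C, C++, JavaScript, and Python behave the same way):

public class FloatCheck {
    public static void main(String[] args) {
        System.out.println(0.1 + 0.2);           // prints 0.30000000000000004
        System.out.println(0.1 + 0.2 == 0.3);    // prints false
    }
}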
The IEEE 754 binary32 data type (the most common representation for float these days) is closer to being a quirky subset of the rationals, ℚ.
Let’s ignore special values such as NaN(a), subnormals, +0, −0, +Inf, and −Inf for a moment. Each of the remaining values is called a normal value, and is the product of a constrained integer (the significand) and a constrained power of 2 (the exponent).
The significand is constrained to the disjoint ranges [−2²⁴+1, −2²³] and [2²³, 2²⁴−1].
The exponent is constrained to 2^(E−23), where −126 ≤ E ≤ 127.
Because some of those exponents are negative, the resulting number is in ℚ.
Subnormal values are also in ℚ. Like normals, the significand has two disjoint ranges: [−2²³+1, −1] and [1, 2²³−1]. The exponent is fixed at 2⁻¹⁴⁹.
The remaining special numbers +0, −0, +Inf, and −Inf don’t quite slot into ℚ or ℝ. The infinities behave somewhat like +∞ and −∞, and in fact I usually write them that way to save typing. And the signed zeros +0 and −0 both behave mathematically like 0 nearly everywhere.
But really, the values ±Inf stand in for all values too large to express in the type. You can have arithmetic which results in a finite value in ℝ or ℚ, and yet is Inf in a float.
And the values ±0 essentially stand in for all the values whose magnitude is too small to express in the type. We at least get to remember their signs. Their signs aren’t visible most of the time; however, +1/+0 = +Inf and +1/−0 = −Inf.
And then there’s NaN(a). These are Not-a-Numbers, and as their name suggests, they are not numbers. They come in two flavors (signaling and quiet), and can have a payload. For now I have abstracted those details away in an abstract argument a. Because they aren’t numbers, they aren’t in ℝ or any other set of numbers.
So, float values are not a proper subset of any other numeric category, because some float values aren’t numbers, and some of the numeric values have special properties. At the very least, we can say normals and subnormals are a proper subset of ℚ. Beyond that, “it’s complicated.”
Addendum:
It turns out that the set of rationals with power-of-2 denominators is known as the dyadic rationals, and these form a ring denoted by ℤ[1/2].
The set of floating point normals and subnormals is thus a proper subset of ℤ[1/2].
I’m a competitive programmer, and I have spent a lot of time learning algorithms and techniques that you will never use in real-life programming. However, let me tell you something: I’m currently starting to learn Android development, and most of the people I know spent a LOT more time learning concepts that only took me 1–2 days to learn.
I think the benefits of competitive programming boil down to training your mind to think faster and to think in new ways that other programmers can’t. It’s like being an ex-footballer: you can easily enter the domain of basketball if you want to, because you already have the muscle and agility needed for that kind of sport, and the only things you need to focus on are the rules of basketball, how to use your hands instead of your legs, how to achieve certain goals, etc. Competitive programming helps you build a solid base of computer science knowledge that will give you great benefits later, when you want to learn anything simpler or relatively easier.
I never regret the time I spent on competitive programming, and I still compete from time to time in online contests. Source: Andree Kaba
As a software engineer, do you feel the biggest advantage of unit tests is regression testing the code?
For me, the biggest advantage is getting really fast design feedback. I’m able to think about how my code should be connected to other parts of the system in ways that make it easy to split apart. I’m able to run that code in isolation, without spinning up the rest of the system.
At that point, I can be confident that the public interface to that code is making it easy to work with. I can then dive in to make sure that all the logic details work as I expect – not by reading the code, but by running the code and having an automatic pass/fail check.
Regression tests are useful and important. They are way down the list for me, though. Tests are part and parcel of how I think about code.
Why should all the unit tests be independent of each other?
Two reasons:
easier to identify the root cause of a failure when there is only one reason it can fail
easier to understand and add test cases when you don’t have to consider history
Making tests depend on the state your application is in before you run the test is a major problem. It makes the tests less repeatable. They become less clear to understand – what exactly are they testing? It’s not self-describing in the test case itself – you have to ‘know’ what state the application is in. By Alan Mellor
Do software engineers learn profiling/monitoring techniques on the job, and is it not generally covered in the computer science curriculum?
Software engineers learn most of their skills on the job. A lot of hard lessons come from bugs and outages. When there is an outage, large tech organizations require that a document be written and reviewed; so if there was an easy fix that could have prevented the outage, or made it less likely, and it affects things you are responsible for, you definitely remember it. Source
What is the fastest way to read all the bytes of a 50GB file?
If you need to access the entire contents of the file in non-sequential order, repeatedly, why not get a machine with 64GB of memory and read the entire file into memory once, then keep it locked in memory and do all the actual work there? In that case, reading it is a fixed cost at program start. Does it really matter if it’s super fast?
Can the file be seen as a sequence of records that can be operated on individually? If so, it’s probably far better to split the file into 2GB shards and distribute those over multiple SSDs. Then work on those shards in parallel, e.g. using 24 cores on a single machine. Depending on the workload, you might be constrained by the combined maximum width of all buses involved, and you may get higher throughput by distributing the work across several machines.
Are you really really sure that you need to read the entire file? Again, you need to answer the question of what you actually want to do after you’ve read the file. Depending on what that is, it may turn out to be faster to build an external index offline, read that, and use it to only access the parts of the larger file you actually need.
To recap, taken literally the question has a trivial answer: Don’t read a file if you’re not going to do anything with it. If you are, though, then the actual computation you are planning to do will inform the organization of the data, including where and how to store the data (a single 50GB file may not be the best idea) and how to access it for maximum throughput. Source.
Can a machine code like a human programmer?
Not for general purpose programming – but for certain very constrained tasks, that’s a routine thing to do.
EXAMPLE OF WHERE COMPUTERS ALREADY DO THAT:
For example, a “compiler” for a human programming language like (say) C++ has the task of writing a machine-code program that does exactly what the C++ code would do. Compilers are now MUCH better at doing that than human programmers are.
So if you wanted some kind of an AI machine to write code – you’d need a way to precisely describe what you wanted it to do under all circumstances.
EXAMPLE OF WHERE IT WOULD BE VERY DIFFICULT:
For example: “Hey computer – write me a program to take a sentence and reverse the order of all of the letters in each word.”
That SOUNDS like something that an artificial intelligence would be able to write – but it turns out that it couldn’t. The difficulty is that the specification for this problem is “under-specified”.
For example, should we consider “under-specified” to be one word or two? Do you want the answer to be rednu-deificeps or deificeps-rednu? This matters. Do you want the compound word “afterlife” to be reversed like “after” and “life” or as one word? What about the sentence:
“Pi is approximately 3.1415926”
Do you want the number reversed? Is it a “word”?
Do you want the capital letter on the first word of a sentence to be un-capitalized and the new first letter of the first word to be capitalized instead?
“Hello world” => “Olleh dlrow” or “olleH dlrow” ?
THIS MAY SOUND TRIVIAL BUT…
“Hey computer – write me a program to drive a car.” – ends up being a MASSIVE specificational nightmare – details of how it should obey the law – and examples where disobeying the law is necessary to avoid killing a pedestrian.
Consider Tesla’s task of building an actual, for-real self-driving car by training an AI.
THE POINT BEING:
In order to have an AI translate your requirement into a program that will actually WORK – you need to describe the problem precisely.
Computer programming languages are (in a sense) very rigid, specific, unambiguous ways to tell the computer what machine-code program you’d like it to write for you.
A lot of what human programmers do is to think about these kinds of issues…and writing the actual code isn’t all that big of a deal. Source.
What is the strangest sorting algorithm?
Invented on 4chan by some anonymous poster, I bring you sleep sort.
The algorithm basically works like this:
For every element x in an array, start a new program that:
Sleeps for x seconds
Prints out x
The clock starts on all the elements at the same time.
It works for any array that has non-negative numbers.
Not every day that you invent a sorting algorithm on an online forum.
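A toy sketch in Java (scaled to tenths of a second so it finishes quickly; obviously not something to use for real work):

public class SleepSort {
    public static void main(String[] args) {
        int[] values = {3, 1, 4, 1, 5, 9, 2, 6};
        for (int x : values) {
            new Thread(() -> {
                try {
                    Thread.sleep(x * 100L);      // sleep proportional to the value
                } catch (InterruptedException ignored) {
                }
                System.out.println(x);           // smaller values wake up and print first
            }).start();
        }
    }
}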
A self-replicating piece of code, the classic fork bomb, can be written incredibly simply in almost all programming languages and will grind most machines to a halt in no time at all, due to the nature of exponential increase.
Here it is in basic C. All the program does is create another copy of itself, over and over again, until all resources are exhausted, usually by simply filling up the operating system’s process table.
#include <unistd.h>
int main(void)
{
while(1) fork();
}
As noted in the Wikipedia example, careful use of ulimits for non-root users on *nix machines can help protect against this.
Another example of mostly unintentionally dangerous code is the humble off-by-one error, which is probably one of the most common causes of security flaws in modern software. This is where programmers pay insufficient attention to the extent of memory they have allocated, or don’t guard its limits correctly, and someone is able to (accidentally or deliberately) inject bytes where they should not be, with unpredictable errors, crashes, or potentially full exploits of the host machine as a result.
As a software engineer, what is the weirdest feature you were ever asked to create?
I was working as a game programmer and the score for our game was displayed in the corner, but lacked delimiters so it was hard to read. That is, it displayed the score like this:
1000000
Instead of this:
1,000,000
It irritated me, so I wrote a function to fix it. I wasn’t feeling very creative that day, so I called it SlapCommasInThisHereString(). I figured I’d change it later. I checked it in and moved on.
When the Lead Programmer saw it, he flipped. “Hey, who put in this function?” Sheepishly, I fessed up.
“That’s freaking awesome, man! That’s the most literal function name I’ve ever seen! You get a gold star, Chris.” Source.
Which software’s code awes you with its sheer audacity and brilliance?
LLVM.
If you’ve never heard of it before, LLVM is an acronym for “low-level virtual machine” and it is arguably the single most significant innovation in the development of compilers to this day.
Prior to LLVM, if you wanted to write a compiler for a programming language, you first had to write a front end and then a unique backend for every architecture that you wanted to target; that means that if you wanted to target both x86 and ARM for example, then you would have to write two almost entirely different backends because x86 assembly is obviously different from ARM assembly. These days with LLVM, you just write a single front end that compiles to LLVM IR and you’re basically done; it’s pretty amazing.
And LLVM basically runs the world: Google uses it to compile their C++ code, the Rust compiler uses it, Apple uses it for Swift, Oracle is using it for their new GraalVM for Java and it even provides a JIT compiler too, which coincidentally also happens to be the same JIT compiler that Julia uses; it can even target freakin’ GPUs like it does in the case of Nvidia’s CUDA. Source.
How does Lambda calculus relate to functional programming?
It’s quite simple. Functional programming is an attempt to program using mathematical functions (ones with no side effects) rather than devices focused on state, because state is messy and easily leads to certain kinds of errors. And recursion lets one do things that are inductive (which is generally why one needs state).
So, the lambda calculus lets one talk about recursion and about what is bound to what, and when. Little functions that have variables that recurse and capture the evolution of state, without having one global (and thus messy and hard to reason about) state. It’s the mathematical basis for being able to do functional programming.
Functional programming is the way one writes programs using that as a basis. And one sees the connection when one writes closures and lambda functions, which are functions that don’t have a name. They are one of the inventions of Church, the inventor of the lambda calculus. So, functional programming is a practical application of the lambda calculus theory.
To see this in other words, read Ian Joyner’s answer. There is the theory (computation) and the practice (computers). They go hand in hand, but are not the same. Source
I learned data structures and algorithms but I always fail to solve a question by myself and in online assessments, the questions are getting harder and harder. What should I do?
Learning a data structure means being able to readily identify
Why it exists
What are its advantages
What are its disadvantages
Likewise, solving a problem means being able to identify all of its variants. Very often, the same problem can be worded to sound like a very different problem, but the underlying solution remains the same. If you cannot identify those variants then you haven’t really learnt the underlying data structure or algorithm that solved the problem.
The commonly used data structures and algorithms are limited in number, and they have been thoroughly researched and documented over the years. Beyond a point, there is no scope for innovation in those areas except for academic purposes.
If you are finding an endless stream of new problems, that is likely because you are unable to map them to the old problems you already studied. Source.
Why might a developer declare a member function as private? What is the reason?
To tell other developers – including themselves in the future – that this function is an unimportant detail of how the code is implemented today.
This tells others that they are free to change that implementation. They are free to change, add, or modify anything that is truly private, without fear of breaking anything else. The changes will be contained and will not ripple out all over the system.
Had that method been made public, that would suggest it is an integral part of how the code is to be used. There may be many other pieces of code that depend on it staying the same, or that may also need changing along with any changes made to the function itself.
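A tiny illustration (the class and method names here are invented):

public class PriceCalculator {
    // Public: part of how callers are meant to use this class.
    public int totalInCents(int unitPriceCents, int quantity) {
        return applyBulkDiscount(unitPriceCents * quantity, quantity);
    }

    // Private: an implementation detail we are free to rename, rewrite, or delete tomorrow.
    private int applyBulkDiscount(int totalCents, int quantity) {
        return quantity >= 100 ? totalCents * 95 / 100 : totalCents;
    }
}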
public vs private is of huge importance in writing readable code. Source
How is Google Maps so fast?
Oh my, this is the first time I’ve heard someone remark on something a computer does being “so fast”; usually it’s the other way around.
Well, for images, they use pyramids, which means that the images are stored at different resolutions, in sections or areas.
For this to work, the algorithm that presents those images stores them in your device, so the next time you zoom in or out they are already in memory.
For route searching, each segment of the route, which is the link or street between street corners, is a segment in a node graph and you can use a fast algorithm like Dijkstra or A* to look for the route.
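For a sense of what that looks like, here is a compact Dijkstra sketch over an adjacency list (the graph representation and weights are made up; a real routing engine adds many layers of preprocessing on top of this):

import java.util.Arrays;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class Dijkstra {
    record Edge(int to, int weight) {}

    static int[] shortestDistances(List<List<Edge>> graph, int source) {
        int n = graph.size();
        int[] dist = new int[n];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[source] = 0;

        // Queue of {node, tentative distance}, smallest distance first.
        PriorityQueue<int[]> pq = new PriorityQueue<>(Comparator.comparingInt(a -> a[1]));
        pq.add(new int[]{source, 0});

        while (!pq.isEmpty()) {
            int[] cur = pq.poll();
            int node = cur[0], d = cur[1];
            if (d > dist[node]) continue;                 // stale queue entry, skip it
            for (Edge e : graph.get(node)) {
                int next = d + e.weight();
                if (next < dist[e.to()]) {
                    dist[e.to()] = next;
                    pq.add(new int[]{e.to(), next});
                }
            }
        }
        return dist;
    }
}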
Now, a “meta” thingie about the answer (basis of modern metaverses).
As with Google Search, Google Maps “knows” in advance the most common searches and the most commonly retrieved areas of the world, so it can present those particular results faster.
When a million people a day look for “Eiffel Tower” while one person looks for “Bujumbura”, the information about Paris is kept in RAM on a fast server while the info about Burundi is kept on disk on a slower server.
Those searches and paths are themselves treated like Dijkstra searches: because humans are fairly predictable, AI can quickly work out what you are curious about and present the most common results faster. Source
Is the printf function a version of fprintf that does not need to have the file pointer passed as the first parameter because it always writes to the file stdout?
Congratulations on noticing one of the C library’s longest-standing conveniences!
Yes: printf(…) behaves exactly like fprintf(stdout, …). In effect, printf is a partial evaluation of fprintf with the stream fixed to stdout, the same idea that functional languages such as OCaml make explicit.
I would add, though, that OCaml-style partial evaluation is more flexible, because you can make (the equivalent of) a version of fprintf that prints to an arbitrary stream, not just stdout. Source: Here
Why do most JavaScript tutorials always say not to use for loops, is that not wrong, considering most algorithms are built around for loops?
You probably didn’t read the whole sentence. Most likely they’re saying “do not use for loops, instead use forEach/map/filter, when the performance budget allows for it”.
For example, like this:
const arr = [1, 2, 3, 4];
arr.forEach(n => console.log(n));
This prints all the numbers, without having to worry about getting the start and end indices right, off-by-one errors, and so on. It’s basically a simplified version of this:
for (let i = 0; i < arr.length; i++) {
    console.log(arr[i]);
}
Or, like this, to print only the even numbers:
arr.filter(n => n % 2 == 0).forEach(n => console.log(n));
Which is a simplified version of:
for (let i = 0; i < arr.length; i++) {
    if (arr[i] % 2 == 0) {
        console.log(arr[i]);
    }
}
Obviously, sometimes forEach() won’t do, e.g. when you need some more complex iteration instead of just going through every element, or when your code needs to run as fast as possible (because it’s part of a game engine, say). But when you can use it, it’s definitely the better option; there’s just less room for mistakes with it. Source
What is the fastest scripting language on the server side?
Javascript (or more precisely ECMAScript). And it’s a lot faster than the others. Surprised?
When I heard about Node.js in 2009, I thought people had lost their minds, using JavaScript on the server side. But I had to change my mind.
Node.js is lightning fast. Why? First of all because it is async, but with V8, the open-source engine of Google Chrome, even the JavaScript language itself becomes incredibly fast. The browser wars brought us hyper-optimized JavaScript interpreters/compilers.
Regarding the language: JavaScript is not the most elegant language, but it is definitely a lot better than some people may think. The current version of JavaScript (or better, ECMAScript as specified in ECMA-262 5th edition) is good. If you adopt “use strict”, some strange and unwanted behaviors of the language are eliminated. Harmony, the codename for a future version, is going to be even better and add some extra syntactic sugar similar to some of Python’s constructs.
Does JavaScript still sound too archaic? Try CoffeeScript (from the same author as Backbone.js), which compiles to JavaScript. CoffeeScript makes for cleaner, easier, and more concise programming in environments that use JavaScript (i.e. the browser and Node.js). It’s a relatively new language that is not perfect yet, but it is getting better: http://coffeescript.org/
Why do tech giants like Google, Amazon, and Facebook use C++ for their back-end? What are the advantages of using C++ against other languages?
In general, the important advantage of C++ is that it uses computers very efficiently, and offers developers a lot of control over expensive operations like dynamic memory management. Writing in C++ versus Java or python is the difference between spinning up 1,000 cloud instances versus 10,000. The cost savings in electricity alone justifies the cost of hiring specialist programmers and dealing with the difficulties of writing good C++ code. Source
To a modern day programmer, would you recommend C++ over Rust? Why or why not?
You really need to understand C++ pretty well to have any idea why Rust is the way it is. If you only want to work at Mozilla, learn Rust. Otherwise learn C++ and then switch to Rust if it breaks out and becomes more popular.
Rust is one step forward and two steps back from C++. Embedding the notion of ownership in the language is an obvious improvement over C++. Yay. But Rust doesn’t have exceptions. Instead, it has a bunch of strange little features to provide the RAII’ish behavior that makes C++ really useful. I think on average people don’t know how to teach or how to use exceptions even still. It’s too soon to abandon this feature of C++. Source: Kurt Guntheroth
What is the most common field in computer science?
Java or Javascript-based web applications are the most common. (Yuk!) And, consequently, you’ll be a “dime a dozen” programmer if that’s what you do.
On the other hand, (C++ or C) embedded system programming (i.e. hardware-based software), high-capacity backend servers in data centers, internet router software, factory automation/robotics software, and other operating system software are the least common, and consequently the most in demand. Source: Steven Ussery
Which programming language should I learn first?
Your first language doesn’t matter very much. Both Java and Python are common choices. Python is more immediately useful, I would say.
When you are learning to program, you are learning a whole bunch of things simultaneously:
How to program
How to debug programs that aren’t working
How to use programming tools
A language
How to learn programming languages
How to think about programming
How to manage your code so you don’t paint yourself into corners, or end up with an unmanageable mess
How to read documentation
Beginners often focus too much on their first language. It’s necessary, because you can’t learn any of the others without that, but you can’t learn how to learn languages without learning several… and that means any professional knows a bunch and can pick up more as required. Source: Andrew McGregor
Is it worth learning Java now that Node.js exists?
Absolutely.
If you’re a backend or full-stack engineer, it’s reasonable to focus on your preferred tech, but you’ll be expected to have at least some familiarity with Java, C#, Python, PHP, bash, Docker, HTML/CSS…
And, you need to be good with SQL.
That’s the minimum you should achieve.
The more you know, the more employable — and valuable to your employer or clients — you will be.
Also, languages and platforms are tools. Some tools are more appropriate to some tasks than others.
That means sometimes Node.js is the preferred choice to meet the requirements, and sometimes Java is a better choice — after considering the inevitable trade-offs with every technical decision. Source: Dave Voohis
Which language should I learn for back-end web development in 2022?
Just one?
No, no, that’s not how it works.
To be a competent back-end developer, you need to know at least one of the major, core, back-end programming languages — Java (and its major frameworks, Spring and Hibernate) and/or C# (and its major frameworks, .NET Core and Entity Framework.)
You might want to have passing familiarity with the up-and-coming Go.
You need to know SQL. You can’t even begin to do back-end development without it. But don’t bother learning NoSQL tools until you need to use them.
You should be familiar with the major cloud platforms, AWS and Azure. Others you can pick up if and as needed.
Know Linux, because most back-end infrastructure runs on Linux and you’ll eventually encounter it, even if it’s often hived away into various cloud-based services.
You should know Python and bash scripts. Understand Apache Web Server configuration. Be familiar with Nginx, and if you’re using Java, have some understanding of how Apache Tomcat works.
Understand containerization. Be good with Docker.
Be familiar with JavaScript and HTML/CSS. You might not have to write them, but you’ll need to support front-end devs and work with them and understand what they do. If you do any Node.js (some of us do a lot, some do none), you’ll need to know JavaScript and/or TypeScript and understand Node.
That’ll do for a start.
But even more important than the above, learn computer science.
Learn it, and you’ll learn that programming languages are implementations of fundamental principles that don’t change, whilst the languages themselves come and go.
Learn those fundamental principles, and it won’t matter what languages are in the market — you’ll be able to pick up any of them as needed and use them productively. Source: Dave Voohis
As someone new to programming, how do I know that I have fully understood Python syntax and I’m ready to move on to the next step in learning Python?
It sounds like you’re spending too much time studying Python and not enough time writing Python.
The only way to become good at any programming language — and programming in general — is to practice writing code.
It’s like learning to play a musical instrument: Practice is essential.
Try to write simple programs that do simple things. When you get them to work, write more complex programs to do more complex things.
When you get stuck, read documentation, tutorials and other peoples’ code to help you get unstuck.
If you’re still stuck, set aside what you’re stuck on and work on a different program.
But keep writing code. Write a lot of code.
The more code you write, the easier it will become to write more code. Source: Dave Voohis
What is the best language to learn how to code? I’m learning Python. Is it the best to start?
It depends on what you want to do.
If you want to just mess around with programming as a hobby, it’s fine. In fact, it’s pretty good. Since it’s “batteries included”, you can often get a lot done in just a few lines of code. Learn Python 3, not 2.
If you want to be a professional software engineer, Python’s a poor place to start. Its syntax isn’t terrible, but it’s weird. Its take on OO is different from almost all other OO languages. It’ll teach you bad habits that you’ll have to unlearn when switching to another language.
If you want to eventually be a professional software engineer, learn another OO language first. I prefer C#, but Java’s a great choice too. If you don’t care about OO, C is a great choice. Nearly all major languages inherited their syntax from C, so most other languages will look familiar if you start there.
C++ is a stretch these days. Learn another OO language first. You’ll probably eventually have to learn JavaScript, but don’t start there. It… just don’t.
So, ya. If you just want to do some hobby coding and write some short scripts and utilities, Python’s fine. If you want to eventually be a pro SE, look elsewhere. Source: Chris Nash
Do you need to master all the small details when learning programming? I want to master C++ do I need to read the whole book sequentially giving care to each and every small detail?
You master a language by using it, not just reading about it and memorizing trivia. You’ll pick up and internalize plenty of trivia anyway while getting real world work done.
Reading books and blogs and whatnot helps, but those are more meaningful if you have real world problems to apply the material to. Otherwise, much of it is likely to go into your eyeballs and ooze right back out of your ears, metaphorically speaking.
I usually don’t dig into all the low level details when reading a programming book, unless it’s specifically needed for a problem I am trying to solve. Or, it caught my curiosity, in which case, satisfying my curiosity is the problem I am trying to solve.
Once you learn the basics, use books and other resources to accelerate you on your journey. What to read, and when, will largely be driven by what you decide to work on.
Bjarne Stroustrup, the creator of C++, has this to say:
And no, I’m not a walking C++ dictionary. I do not keep every technical detail in my head at all times. If I did that, I would be a much poorer programmer. I do keep the main points straight in my head most of the time, and I do know where to find the details when I need them.
Why does software engineering pay so much more than other engineering jobs/careers?
Scale. There is no field other than software where a company can have 2 billion customers, and do it with only a few tens of thousands of employees. The only others that come close are petroleum and banking – both of which are also very highly paid. By David Seidman
What’s the best code you’ve seen a professional programmer write? How does it compare to the average programmer?
Professional programmer’s code:
//Here we address a strange issue that was seen on
//production a few times, but is not reproduced
//locally. User can be mysteriously logged out after
//clicking Back button. This seems related to recent
//changes to redirect scheme upon order confirmation.
login(currentUser());
Average programmer’s code:
//Hotfix – don’t ask
login(currentUser());
Professional programmer’s commit message:
Fix memory leak in connection pool
We’ve seen connections leaking from the pool
if any query had already been executed through
it and then exception is thrown.
The root cause was found in ConnectionPool.addExceptionHook()
After the first few years of programming, when the urge to put some cool-looking construct only you can understand into every block of code wears off, you’ll likely come to the conclusion that the professional examples above are actually the code you want to encounter when opening a new project.
If we look at the apps written by good vs average programmers (not talking about total beginners) the code itself is not that much different, but if small conveniences everywhere allow you to avoid frustration while reading it – it is likely written by a professional.
The only valid measurement of code quality is WTFs/minute.
Why is Fortran chosen over C/C++ for simulation software in computational physics?
I worked as an academic in physics for about 10 years, and used Fortran for much of that time. I had to learn Fortran for the job, as I was already fluent in C/C++.
The prevalence of Fortran in computational physics comes down to three factors:
Performance. Yes, Fortran code is typically faster than C/C++ code. One of the main reasons for this is that Fortran compilers are heavily optimised towards making fast code, and the Fortran language spec is designed such that compilers will know what to optimise. It’s possible to make your C program as fast as a Fortran one, but it’s considerably more work to do so.
Convenience. Imagine you want to add a scalar to an array of values – this is the sort of thing we do all the time in physics. In C you’d either need to rely on an external library, or you’d need to write a function for this (leading to verbose code). In Fortran you just add them together, and the scalar is broadcasted across all elements of the array. You can do the same with multiplication and addition of two arrays as well. Fortran was originally the Formula-translator, and therefore makes math operations easy.
Legacy. When you start a PhD, you’re often given some ex-post-doc’s (or professor’s) code as a starting point. Often times this code will be in Fortran (either because of the age of the person, or because they were given Fortran code). Unfortunately sometimes this code is F77, which means that we still have people in their 20s learning F77 (which I think is just wrong these days, as it gives Fortran as a whole a bad name). Source: Erlend Davidson
If a pointer is just a variable that contains memory, then why does it need to know the type of its values?
My friend, if you like C, you are gonna looooove B. B was C’s predecessor language. It’s a lot like C, but for C, Thompson and Ritchie added in data types. Basically, C is for lazy programmers. The only data type in B was determined by the size of a word on the host system. B was for “real-men programmers” who ate Hollerith cards for extra fiber, chewed iron into memory cores when they ran out of RAM, and dreamed in hexadecimal. Variables are evaluated contextually in B, and it doesn’t matter what the hell they contain; they are treated as though they hold integers in integer operations, and as though they hold memory addresses in pointer operations. Basically, B has all of the terseness of an assembly language, without all of the useful tooling that comes along with assembly.
As others indicate, pointers do not hold memory; they hold memory addresses. They are typed because before you go to that memory address, you probably want to know what’s there. Among other issues, how big is “there”? Should you read eight bits? Sixteen? Thirty-two? More? Inquiring minds want to know! Of course, it would also be nice to know whether the element at that address is an individual element or one element in an array, but C is for “slightly less real-men programmers” than B. Java does fully differentiate between scalars and arrays, and is therefore clearly for the weak minded. /jk Source: Joshua Gross
Hidden Features of C#
What are the most hidden features or tricks of C# that even C# fans, addicts, experts barely know?
This isn’t C# per se, but I haven’t seen anyone who really uses System.IO.Path.Combine() to the extent that they should. In fact, the whole Path class is really useful, but no one uses it!
Lambdas and type inference are underrated. Lambdas can have multiple statements, and they double as a compatible delegate object automatically (just make sure the signatures match), as in the sketch below:
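A rough sketch (the names here are invented for the example):
// A statement lambda with several statements, assigned to a compatible delegate type.
Func<int, int> square = x =>
{
    var result = x * x;
    Console.WriteLine(result);
    return result;
};
// The same idea with Action; the lambda converts to the delegate automatically.
Action<string> log = s => { Console.WriteLine("LOG: " + s); };
square(5);    // prints 25
log("done");  // prints LOG: done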
When normalizing strings, it is highly recommended that you use ToUpperInvariant instead of ToLowerInvariant because Microsoft has optimized the code for performing uppercase comparisons.
I remember a coworker who always changed strings to uppercase before comparing. I always wondered why he did that, because it felt more “natural” to me to convert to lowercase first. After reading the book, now I know why.
My favorite trick is using the null coalesce operator and parentheses to automagically instantiate collections for me.
private IList<Foo> _foo;
public IList<Foo> ListOfFoo
{ get { return _foo ?? (_foo = new List<Foo>()); } }
Here are some interesting hidden C# features, in the form of undocumented C# keywords:
__makeref
__reftype
__refvalue
__arglist
These are undocumented C# keywords (even Visual Studio recognizes them!) that were added to allow for more efficient boxing/unboxing prior to generics. They work in coordination with the System.TypedReference struct.
There’s also __arglist, which is used for variable length parameter lists.
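As a rough sketch of how these fit together (assuming a compiler and runtime that accept the undocumented keywords, e.g. the desktop .NET Framework):
int value = 42;
TypedReference tr = __makeref(value);   // take a typed reference to the local variable
Type t = __reftype(tr);                 // typeof(int)
int copy = __refvalue(tr, int);         // read through the reference: 42
__refvalue(tr, int) = 99;               // write through the reference: value is now 99
// __arglist gives variable-length parameter lists without a params array:
static void PrintAll(__arglist)
{
    var it = new ArgIterator(__arglist);
    while (it.GetRemainingCount() > 0)
        Console.WriteLine(TypedReference.ToObject(it.GetNextArg()));
}
// Called as: PrintAll(__arglist(1, "two", 3.0));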
One thing folks don’t know much about is System.WeakReference — a very useful class that keeps track of an object but still allows the garbage collector to collect it.
The most useful “hidden” feature would be the yield return keyword. It’s not really hidden, but a lot of folks don’t know about it. LINQ is built atop this; it allows for delay-executed queries by generating a state machine under the hood. Raymond Chen recently posted about the internal, gritty details.
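A small illustration of the idea (a sketch with made-up names):
// Nothing runs until the caller starts enumerating; the compiler turns
// this method into a state machine behind the scenes.
static IEnumerable<int> Evens(int max)
{
    for (int i = 0; i <= max; i += 2)
        yield return i;   // execution pauses here until the next item is requested
}
foreach (var n in Evens(10))
    Console.WriteLine(n);   // 0, 2, 4, 6, 8, 10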
Using @ for variable names that are keywords.
var @object = new object();
var @string = "";
var @if = IpsoFacto();
If you want to exit your program without calling any finally blocks or finalizers use FailFast:
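Something along these lines (the condition is hypothetical):
// Terminates the process immediately: no finally blocks, no finalizers.
// On Windows an entry is typically written to the application event log.
if (unrecoverableStateDetected)
    Environment.FailFast("Unrecoverable state detected, aborting.");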
IANA has registered the official MIME Type for JSON as application/json.
When asked about why not text/json, Crockford seems to have said JSON is not really JavaScript nor text and also IANA was more likely to hand out application/* than text/*.
JSON (JavaScript Object Notation) and JSONP (“JSON with padding”) seem to be very similar formats, and it can therefore be confusing which MIME type each should use. Even though the formats are similar, there are some subtle differences between them.
So whenever in doubt, I have a very simple approach (which works perfectly fine in most cases): go and check the corresponding RFC document.
JSON: RFC 4627 (The application/json Media Type for JavaScript Object Notation (JSON)) is the specification of the JSON format. It says in section 6 that the MIME media type for JSON text is
application/json.
JSONP: JSONP (“JSON with padding”) is handled a different way than JSON in a browser. JSONP is treated as a regular JavaScript script and therefore it should use application/javascript, the current official MIME type for JavaScript. In many cases, however, the text/javascript MIME type will work fine too.
Note that text/javascript has been marked as obsolete by RFC 4329 (Scripting Media Types), and it is recommended to use the application/javascript type instead. However, due to legacy reasons, text/javascript is still widely used and it has cross-browser support (which is not always the case with the application/javascript MIME type, especially with older browsers).
What are some mistakes to avoid while learning programming?
Overuse of the GOTO statement. Most schools teach that this is a NO-NO.
Not commenting your code with proper documentation – what exactly does the code do?
Endless loops – a structured loop that has NO EXIT point.
Overwriting memory – destroying data and/or code, especially with dynamic allocation, stacks and queues.
Not following discipline – Requirements, Design, Code, Test, Implementation.
Moreover, complex code should have a BLUEPRINT – a design. Skipping it is like building a house without a floor plan. Code/programs that have a requirements and design specification BEFORE writing the code tend to have a LOWER error rate, and less time is spent debugging and fixing errors. Source: Quora
The thing that always struck me is that the best programmers I would meet or read all had a couple of things in common.
They didn’t use IDEs, preferring Emacs or Vim.
They all learned or used Functional Programming (Lisp, Haskell, OCaml)
They all wrote or endorsed some kind of testing, even if it’s just minimal TDD.
They avoided fads and dependencies like the plague.
It is a basic truth that learning Lisp, or any functional programming, will fundamentally change the way you program and think about programming. Source: Quora
Which is better among pair programming and test-driven development?
The two work well together. Both are effective at what they do:
Pairing is a continuous code review, with a human-powered ‘auto suggest’. If you like github copilot, pairing does that with a real brain behind it.
TDD forces you to think about how your code will be used early on in the process. That gives you the chance to code things so they are clear and easy to use
Both of these are ‘shift-left’ activities. In the days of old, code review and testing happened after the code was written. Design happened up front, but separate to coding, so you never got to see if the design was actually codeable properly. By shifting these activities to before the code gets written, we get a much faster feedback loop. That enables us to make corrections and improvements as we go.
Neither is better than the other. They target different parts of the coding challenge. By Alan Mellor
Do software engineers ever feel the need to have more than 2 monitors when they are coding?
Yes, I’ve found that three can be very helpful, especially these days.
Monitor 1: IDE full screen
Monitor 2: Google, JIRA ticket, documentation. Manual Test tools
Monitor 3: Zoom/Teams/Slack/Outlook for general comms
That third monitor becomes almost essential if you are remote pairing and want to see your collaborator in real time.
My current work is teaching groups in our academy. That also benefits from three monitors: presenter view, participant view, and Zoom for chat and raised hands in the group.
I can get away with two monitors. I can even do it with a £3 HDMI fake monitor USB plug. Neither is quite as effective. Source: Alan Mellor
How do you use classes interchangeably when the properties are different (C#, OOP, design patterns, development)?
You make the properties not different. And the key way to do that is by removing the properties completely.
Instead, you tell your objects to do some behaviour.
Say we have three classes full of different data that all needs adding to some report. Make an interface like this:
interface IReportSource {
void includeIn( Report r );
}
So here, all your classes with different data will implement this interface. We can call the method ‘includeIn’ on each of them, passing in a concrete Report object to that method. This will be the report that is being generated.
Then your first class which used to look like
class ALoadOfData {
    public string Name { get; set; }
    public int Quantity { get; set; }
}
(forgive the rusty/pseudo C# syntax please)
can be translated into:
class ARealObject : IReportSource {
    private string name;
    private int quantity;
    public void includeIn( Report r ) {
        r.addBasicItem( name, quantity );
    }
}
You can see how the properties are no longer exposed. They remain encapsulated in the object, available for use inside our includeIn() method. That is now polymorphic, and you would write a custom includeIn() for each kind of class implementing IReportSource. It can then call a suitable method on the Report class, with a suitable number of properties (now hidden; so just fields). By Alan Mellor
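To round that off, here is a rough sketch of how the report-generating side might then consume these objects (Report, addBasicItem and AnotherRealObject are hypothetical, carried over from or added to the example above):
// The report code no longer cares what concrete type each source is;
// it simply asks every source to include itself in the report.
var sources = new List<IReportSource> { new ARealObject(), new AnotherRealObject() };
var report = new Report();
foreach (var source in sources)
{
    source.includeIn(report);
}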
What are some lesser-known but useful data structures?
2- Bloom filter: a bit array of m bits, initially all set to 0.
To add an item you run it through k hash functions that will give you k indices in the array which you then set to 1.
To check if an item is in the set, compute the k indices and check if they are all set to 1.
Of course, this gives some probability of false-positives (according to wikipedia it’s about 0.61^(m/n) where n is the number of inserted items). False-negatives are not possible.
Removing an item is impossible, but you can implement a counting Bloom filter, represented by an array of ints with increment/decrement.
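A minimal sketch of the idea in C# (the hash scheme here is a toy, just to show the mechanics):
class BloomFilter
{
    private readonly bool[] bits;
    private readonly int k;   // number of hash functions

    public BloomFilter(int m, int k) { bits = new bool[m]; this.k = k; }

    // Derive k indices from two toy hashes of the item.
    private IEnumerable<int> Indices(string item)
    {
        int h1 = item.GetHashCode();
        int h2 = h1 * 31 + item.Length;
        for (int i = 0; i < k; i++)
            yield return (((h1 + i * h2) % bits.Length) + bits.Length) % bits.Length;
    }

    public void Add(string item)
    {
        foreach (int i in Indices(item)) bits[i] = true;
    }

    // True may be a false positive; false is always correct.
    public bool MightContain(string item)
    {
        foreach (int i in Indices(item)) if (!bits[i]) return false;
        return true;
    }
}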
3- Rope: It’s a string that allows for cheap prepends, substrings, middle insertions and appends. I’ve really only had use for it once, but no other structure would have sufficed. Regular string and array prepends were just far too expensive for what we needed to do, and reversing everything was out of the question.
4- Skip list (from Wikipedia): A skip list is a probabilistic data structure, based on multiple parallel, sorted linked lists, with efficiency comparable to a binary search tree (order log n average time for most operations).
They can be used as an alternative to balanced trees (using probabilistic balancing rather than strict enforcement of balancing). They are easy to implement and faster than, say, a red-black tree. I think they should be in every good programmer’s toolchest.
If you want to get an in-depth introduction to skip-lists here is a link to a video of MIT’s Introduction to Algorithms lecture on them.
Also, here is a Java applet demonstrating Skip Lists visually.
5– Spatial Indices, in particular R-trees and KD-trees, store spatial data efficiently. They are good for geographical map coordinate data and VLSI place and route algorithms, and sometimes for nearest-neighbor search.
Bit Arrays store individual bits compactly and allow fast bit operations.
6- Zippers: derivatives of data structures that modify the structure to have a natural notion of a ‘cursor’ (the current location). These are really useful as they guarantee indices cannot be out of bounds; they are used, e.g., in the xmonad window manager to track which window is focused.
9- Heap-ordered search trees (also known as treaps): you store a bunch of (key, prio) pairs in a tree, such that it’s a search tree with respect to the keys, and heap-ordered with respect to the priorities. One can show that such a tree has a unique shape (and it’s not always fully packed up-and-to-the-left). With random priorities, it gives you expected O(log n) search time, IIRC.
10- A niche one is adjacency lists for undirected planar graphs with O(1) neighbour queries. This is not so much a data structure as a particular way to organize an existing data structure. Here’s how you do it: every planar graph has a node with degree at most 6. Pick such a node, put its neighbors in its neighbor list, remove it from the graph, and recurse until the graph is empty. When given a pair (u, v), look for u in v’s neighbor list and for v in u’s neighbor list. Both have size at most 6, so this is O(1).
By the above algorithm, if u and v are neighbors, you won’t have both u in v’s list and v in u’s list. If you need this, just add each node’s missing neighbors to that node’s neighbor list, but store how much of the neighbor list you need to look through for fast lookup.
11- Lock-free alternatives to standard data structures, i.e. lock-free queues, stacks and lists, are much overlooked. They are increasingly relevant as concurrency becomes a higher priority, and are a much more admirable goal than using mutexes or locks to handle concurrent reads/writes.
Mike Acton’s (often provocative) blog has some excellent articles on lock-free design and approaches
12- I think Disjoint Set is pretty nifty for cases when you need to divide a bunch of items into distinct sets and query membership. A good implementation of the Union and Find operations results in amortized costs that are effectively constant (the inverse of Ackermann’s function, if I recall my data structures class correctly).
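A compact sketch of a disjoint set (union-find) with path compression and union by rank:
class DisjointSet
{
    private readonly int[] parent, rank;

    public DisjointSet(int n)
    {
        parent = new int[n];
        rank = new int[n];
        for (int i = 0; i < n; i++) parent[i] = i;
    }

    public int Find(int x)
    {
        if (parent[x] != x)
            parent[x] = Find(parent[x]);   // path compression
        return parent[x];
    }

    public void Union(int a, int b)
    {
        int ra = Find(a), rb = Find(b);
        if (ra == rb) return;
        if (rank[ra] < rank[rb]) (ra, rb) = (rb, ra);   // attach the shorter tree under the taller one
        parent[rb] = ra;
        if (rank[ra] == rank[rb]) rank[ra]++;
    }
}
// ds.Find(a) == ds.Find(b) answers the "are these in the same set?" query.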
13- Fibonacci heaps: they’re used in some of the fastest known algorithms (asymptotically) for a lot of graph-related problems, such as the Shortest Path problem. Dijkstra’s algorithm runs in O(E log V) time with standard binary heaps; using Fibonacci heaps improves that to O(E + V log V), which is a huge speedup for dense graphs. Unfortunately, they have a high constant factor, which often makes them impractical.
14- Anyone with experience in 3D rendering should be familiar with BSP trees. Generally, it’s a method of structuring a 3D scene so that it is manageable for rendering, given the camera coordinates and bearing.
Binary space partitioning (BSP) is a method for recursively subdividing a space into convex sets by hyperplanes. This subdivision gives rise to a representation of the scene by means of a tree data structure known as a BSP tree.
In other words, it is a method of breaking up intricately shaped polygons into convex sets, or smaller polygons consisting entirely of non-reflex angles (angles smaller than 180°). For a more general description of space partitioning, see space partitioning.
Originally, this approach was proposed in 3D computer graphics to increase the rendering efficiency. Some other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D computer games, and other computer applications that involve handling of complex spatial scenes.
16- Have a look at Finger Trees, especially if you’re a fan of the previously mentioned purely functional data structures. They’re a functional representation of persistent sequences supporting access to the ends in amortized constant time, and concatenation and splitting in time logarithmic in the size of the smaller piece.
Our functional 2-3 finger trees are an instance of a general design technique introduced by Okasaki (1998), called implicit recursive slowdown. We have already noted that these trees are an extension of his implicit deque structure, replacing pairs with 2-3 nodes to provide the flexibility required for efficient concatenation and splitting.
A Finger Tree can be parameterized with a monoid, and using different monoids will result in different behaviors for the tree. This lets Finger Trees simulate other data structures.
18- I’m surprised no one has mentioned Merkle trees (ie. Hash Trees).
Used in many cases (P2P programs, digital signatures) where you want to verify the hash of a whole file when you only have part of the file available to you.
19- Van Emde Boas trees (suggested by zvrba)
I think it’d be useful to know why they’re cool. In general, the question “why” is the most important to ask 😉
My answer is that they give you O(log log n) dictionaries with {1..n} keys, independent of how many of the keys are in use. Just like repeated halving gives you O(log n), repeated sqrting gives you O(log log n), which is what happens in the vEB tree.
20- An interesting variant of the hash table is called Cuckoo Hashing. It uses multiple hash functions instead of just 1 in order to deal with hash collisions. Collisions are resolved by removing the old object from the location specified by the primary hash, and moving it to a location specified by an alternate hash function. Cuckoo Hashing allows for more efficient use of memory space because you can increase your load factor up to 91% with only 3 hash functions and still have good access time.
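A stripped-down sketch of the displacement idea, with two tables and two toy hash functions:
class CuckooHashSet
{
    private readonly int?[] table1, table2;
    private readonly int size;

    public CuckooHashSet(int size)
    {
        this.size = size;
        table1 = new int?[size];
        table2 = new int?[size];
    }

    private int H1(int key) => ((key % size) + size) % size;
    private int H2(int key) => (int)(((uint)key * 2654435761u) % (uint)size);

    public bool Contains(int key) => table1[H1(key)] == key || table2[H2(key)] == key;

    public void Add(int key)
    {
        if (Contains(key)) return;
        int current = key;
        for (int i = 0; i < size; i++)   // bound the number of displacements
        {
            int i1 = H1(current);
            if (table1[i1] == null) { table1[i1] = current; return; }
            (current, table1[i1]) = (table1[i1].Value, current);   // kick out the old occupant

            int i2 = H2(current);
            if (table2[i2] == null) { table2[i2] = current; return; }
            (current, table2[i2]) = (table2[i2].Value, current);   // kick out again
        }
        // A real implementation would rehash with new hash functions at this point.
        throw new InvalidOperationException("Displacement cycle; table needs rehashing");
    }
}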
Is there any way to make interpreted languages such as Python just as fast as C++? Why or why not?
Variable names in languages like Python are not bound to storage locations until run time. That means you have to look up each name to find out what storage it is bound to and what its type is before you can apply an operation like “+” to it. In C++, names are bound to storage at compile time, so no lookup is needed, and the type is fixed at compile time so the compiler can generate machine code with no overhead for interpretation. Late-bound languages will never be as fast as languages bound at compile time.
You could make a language that looks kinda like Python that is compile-time bound and statically typed. You could incrementally compile such a language. But you can also build an environment that incrementally compiles C++ so it would feel a lot like using Python. Try godbolt or tutorialspoint if you want to see this actually working for small programs.
I want to be a computer programmer when I grow up but I don’t have a high IQ. What should I do?
Have I got good news for you! No one has ever asked me my IQ, nor have I ever asked anyone for their IQ. This was true when I was a software engineer, and is true now that I’m a computer scientist.
Try to learn to program. If you can learn in an appropriate environment (a class with a good instructor), go from there. If you fail the first time, adjust your learning approach and try again. If you still can’t, find another future; you probably wouldn’t like computer programming, anyway. If you learn later, that’s fine.
Which are the hardest C++ concepts beginners struggle to understand? How would you have explained them?
Beginners to C++ will consistently struggle with getting a C++ program off the ground. Even “Hello World” can be a challenge. Making a GUI in C++ from scratch? Almost impossible in the beginning.
These 4 areas cannot be learned by any beginner to C++ in 1 day or even 1 month in most cases. These areas challenge nearly all beginners and I have seen cases where it can take a few months to teach.
These are the most fundamental things you need to be able to do to build and produce a program in C++.
Basic Challenge #1: Creating a Program File
Compiling and linking, even in an IDE.
Project settings in an IDE for C++ projects.
Make files, scripts, environment variables affecting compilation.
Basic Challenge #2: Using Other People’s C++ Code
Going outside the STL and using libraries.
Proper library paths in source, file path during compile.
You cannot explain any of them in a way that most people will pick up right away. You can describe these things by way of analogy, and you can even have learners mirror you while you demonstrate them. I’ve done similar things with trainees in a work setting. In the end, it usually requires time on the order of months and years to pick up these things.
What is a list of programming languages ordered from easiest to hardest to learn?
As a professional compiler writer and a student of computer languages and computer architecture, I think this question needs a deeper analysis.
I would propose the following taxonomy:
1. Assembly code,
2. Implementation languages,
3. Low Level languages and
4. High Level Languages.
Assembly code is where there is a one-for-one translation between source and machine code.
Macro processors were invented to improve productivity, but for debugging, a one-for-one listing is needed. The next question is “What is the hardest Assembly code?” I would vote for the x86-32. It is a very byzantine architecture with a number of mistakes and missteps. Fortunately the x86-64 cleans up many of these errors.
Implementation languages are languages that are architecture specific but allow a more statement-like form of expression.
There is no “semantic gap” between these languages and the machine. Bliss, PL360, and the first versions of C were in this category. They required the same understanding of the machine as assembly, without the pain of assembly. These are hard languages; the gap is only one of syntax.
Next are the Low Level Languages.
Modern “C” firmly fits here. These are languages whose design was molded around the limitations of computer architecture. FORTRAN, C, Pascal, and Basic are archetypes of these languages. They are easier to learn and use than Assembly and Implementation languages. They all have a “Run Time Library” that maintains an execution environment.
As a note, LISP has some syntax, CAR and CDR, which are left over from the IBM 704 it was first implemented on.
Last are the “High Level Languages”.
These are languages that require an extensive runtime environment. Except for Algol, they require a “garbage collector” for efficient memory support. The languages are: Algol, SNOBOL4, LISP (and its variants), Java, Smalltalk, Python, Ruby, and Prolog.
Which of these is hardest? I would vote for Prolog, with LISP being second. Why? The logical process of “Resolution” has taken me some time to learn, and mastery is a long way away. Is it harder than Assembly code? Yes and no. I would never attempt, in Assembly, a problem I would use Prolog for; the amount of effort is too big. I find I spend hours writing 20 lines of Prolog which replace hundreds of lines of SNOBOL4. LISP can be hard unless you have intelligent editors and other tools. In one sense LISP is an “assembly language for an AI machine” and Prolog is an “assembly language for a logic machine.” Both Prolog and LISP are very powerful languages. I find it takes deep mental effort to write code in both, but the code does wonderful things!
What and where are the stack and the heap?
Where and what are they (physically in a real computer’s memory)?
To what extent are they controlled by the OS or language run-time?
What is their scope?
What determines the size of each of them?
What makes one faster?
The stack is the memory set aside as scratch space for a thread of execution. When a function is called, a block is reserved on the top of the stack for local variables and some bookkeeping data. When that function returns, the block becomes unused and can be used the next time a function is called. The stack is always reserved in a LIFO (last in first out) order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.
The heap is memory set aside for dynamic allocation. Unlike the stack, there’s no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time; there are many custom heap allocators available to tune heap performance for different usage patterns.
Each thread gets a stack, while there’s typically only one heap for the application (although it isn’t uncommon to have multiple heaps for different types of allocation).
To answer your questions directly:
To what extent are they controlled by the OS or language runtime?
The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.
What is their scope?
The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.
What determines the size of each of them?
The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).
What makes one faster?
The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or deallocation. Also, each byte in the stack tends to be reused very frequently which means it tends to be mapped to the processor’s cache, making it very fast. Another performance hit for the heap is that the heap, being mostly a global resource, typically has to be multi-threading safe, i.e. each allocation and deallocation needs to be – typically – synchronized with “all” other heap accesses in the program.
Stack:
Variables created on the stack will go out of scope and are automatically deallocated.
Much faster to allocate in comparison to variables on the heap.
Implemented with an actual stack data structure.
Stores local data, return addresses, used for parameter passing.
Can have a stack overflow when too much of the stack is used (mostly from infinite or too deep recursion, very large allocations).
Data created on the stack can be used without pointers.
You would use the stack if you know exactly how much data you need to allocate before compile time and it is not too big.
Usually has a maximum size already determined when your program starts.
Heap:
Stored in computer RAM just like the stack.
In C++, variables on the heap must be destroyed manually and never fall out of scope. The data is freed with delete, delete[], or free.
Slower to allocate in comparison to variables on the stack.
Used on demand to allocate a block of data for use by the program.
Can have fragmentation when there are a lot of allocations and deallocations.
In C++ or C, data created on the heap will be pointed to by pointers and allocated with new or malloc respectively.
Can have allocation failures if too big of a buffer is requested to be allocated.
You would use the heap if you don’t know exactly how much data you will need at run time or if you need to allocate a lot of data.
Responsible for memory leaks.
Example:
void foo() {
    char *pBuffer;   // <-- nothing allocated yet (excluding the pointer itself, which is allocated here on the stack).
    bool b = true;   // Allocated on the stack.
    if (b)
    {
        // Create 500 bytes on the stack
        char buffer[500];
        // Create 500 bytes on the heap
        pBuffer = new char[500];
    }   // <-- buffer is deallocated here, pBuffer is not
}   // <--- oops, there's a memory leak; I should have called delete[] pBuffer;
The most important point is that heap and stack are generic terms for ways in which memory can be allocated. They can be implemented in many different ways, and the terms apply to the basic concepts.
In a stack of items, items sit one on top of the other in the order they were placed there, and you can only remove the top one (without toppling the whole thing over).
The simplicity of a stack is that you do not need to maintain a table containing a record of each section of allocated memory; the only state information you need is a single pointer to the end of the stack. To allocate and de-allocate, you just increment and decrement that single pointer. Note: a stack can sometimes be implemented to start at the top of a section of memory and extend downwards rather than growing upwards.
In a heap, there is no particular order to the way items are placed. You can reach in and remove items in any order because there is no clear ‘top’ item.
Heap allocation requires maintaining a full record of what memory is allocated and what isn’t, as well as some overhead maintenance to reduce fragmentation, find contiguous memory segments big enough to fit the requested size, and so on. Memory can be deallocated at any time leaving free space. Sometimes a memory allocator will perform maintenance tasks such as defragmenting memory by moving allocated memory around, or garbage collecting – identifying at runtime when memory is no longer in scope and deallocating it.
To what extent are they controlled by the OS or language runtime?
As mentioned, heap and stack are general terms, and can be implemented in many ways. Computer programs typically have a stack called a call stack which stores information relevant to the current function such as a pointer to whichever function it was called from, and any local variables. Because functions call other functions and then return, the stack grows and shrinks to hold information from the functions further down the call stack. A program doesn’t really have runtime control over it; it’s determined by the programming language, OS and even the system architecture.
A heap is a general term used for any memory that is allocated dynamically and randomly; i.e. out of order. The memory is typically allocated by the OS, with the application calling API functions to do this allocation. There is a fair bit of overhead required in managing dynamically allocated memory, which is usually handled by the runtime code of the programming language or environment used.
What is their scope?
The call stack is such a low level concept that it doesn’t relate to ‘scope’ in the sense of programming. If you disassemble some code you’ll see relative pointer style references to portions of the stack, but as far as a higher level language is concerned, the language imposes its own rules of scope. One important aspect of a stack, however, is that once a function returns, anything local to that function is immediately freed from the stack. That works the way you’d expect it to work given how your programming languages work. In a heap, it’s also difficult to define. The scope is whatever is exposed by the OS, but your programming language probably adds its rules about what a “scope” is in your application. The processor architecture and the OS use virtual addressing, which the processor translates to physical addresses and there are page faults, etc. They keep track of what pages belong to which applications. You never really need to worry about this, though, because you just use whatever method your programming language uses to allocate and free memory, and check for errors (if the allocation/freeing fails for any reason).
What determines the size of each of them?
Again, it depends on the language, compiler, operating system and architecture. A stack is usually pre-allocated, because by definition it must be contiguous memory. The language compiler or the OS determine its size. You don’t store huge chunks of data on the stack, so it’ll be big enough that it should never be fully used, except in cases of unwanted endless recursion (hence, “stack overflow”) or other unusual programming decisions.
A heap is a general term for anything that can be dynamically allocated. Depending on which way you look at it, it is constantly changing size. In modern processors and operating systems the exact way it works is very abstracted anyway, so you don’t normally need to worry much about how it works deep down, except that (in languages where it lets you) you mustn’t use memory that you haven’t allocated yet or memory that you have freed.
What makes one faster?
The stack is faster because all free memory is always contiguous. No list needs to be maintained of all the segments of free memory, just a single pointer to the current top of the stack. Compilers usually store this pointer in a special, fast register for this purpose. What’s more, subsequent operations on a stack are usually concentrated within very nearby areas of memory, which at a very low level is good for optimization by the processor on-die caches.
Both the stack and the heap are memory areas allocated from the underlying operating system (often virtual memory that is mapped to physical memory on demand).
In a multi-threaded environment each thread will have its own completely independent stack but they will share the heap. Concurrent access has to be controlled on the heap and is not possible on the stack.
The heap
The heap contains a linked list of used and free blocks. New allocations on the heap (by new or malloc) are satisfied by creating a suitable block from one of the free blocks. This requires updating the list of blocks on the heap. This meta information about the blocks on the heap is also stored on the heap often in a small area just in front of every block.
As the heap grows new blocks are often allocated from lower addresses towards higher addresses. Thus you can think of the heap as a heap of memory blocks that grows in size as memory is allocated. If the heap is too small for an allocation the size can often be increased by acquiring more memory from the underlying operating system.
Allocating and deallocating many small blocks may leave the heap in a state where there are a lot of small free blocks interspersed between the used blocks. A request to allocate a large block may fail because none of the free blocks are large enough to satisfy the allocation request even though the combined size of the free blocks may be large enough. This is called heap fragmentation.
When a used block that is adjacent to a free block is deallocated the new free block may be merged with the adjacent free block to create a larger free block effectively reducing the fragmentation of the heap.
The stack
The stack often works in close tandem with a special register on the CPU named the stack pointer. Initially the stack pointer points to the top of the stack (the highest address on the stack).
The CPU has special instructions for pushing values onto the stack and popping them off the stack. Each push stores the value at the current location of the stack pointer and decreases the stack pointer. A pop retrieves the value pointed to by the stack pointer and then increases the stack pointer (don’t be confused by the fact that adding a value to the stack decreases the stack pointer and removing a value increases it. Remember that the stack grows to the bottom). The values stored and retrieved are the values of the CPU registers.
If a function has parameters, these are pushed onto the stack before the call to the function. The code in the function is then able to navigate up the stack from the current stack pointer to locate these values.
When a function is called the CPU uses special instructions that push the current instruction pointer onto the stack, i.e. the address of the code executing on the stack. The CPU then jumps to the function by setting the instruction pointer to the address of the function called. Later, when the function returns, the old instruction pointer is popped off the stack and execution resumes at the code just after the call to the function.
When a function is entered, the stack pointer is decreased to allocate more space on the stack for local (automatic) variables. If the function has one local 32 bit variable four bytes are set aside on the stack. When the function returns, the stack pointer is moved back to free the allocated area.
Nesting function calls work like a charm. Each new call will allocate function parameters, the return address and space for local variables and these activation records can be stacked for nested calls and will unwind in the correct way when the functions return.
As the stack is a limited block of memory, you can cause a stack overflow by calling too many nested functions and/or allocating too much space for local variables. Often the memory area used for the stack is set up in such a way that writing below the bottom (the lowest address) of the stack will trigger a trap or exception in the CPU. This exceptional condition can then be caught by the runtime and converted into some kind of stack overflow exception.
Can a function be allocated on the heap instead of a stack?
No, activation records for functions (i.e. local or automatic variables) are allocated on the stack that is used not only to store these variables, but also to keep track of nested function calls.
How the heap is managed is really up to the runtime environment. C uses malloc and C++ uses new, but many other languages have garbage collection.
However, the stack is a more low-level feature closely tied to the processor architecture. Growing the heap when there is not enough space isn’t too hard since it can be implemented in the library call that handles the heap. However, growing the stack is often impossible as the stack overflow only is discovered when it is too late; and shutting down the thread of execution is the only viable option.
In the following C# code
public void Method1()
{
int i = 4;
int y = 2;
class1 cls1 = new class1();
}
Here’s how the memory is managed
Local Variables that only need to last as long as the function invocation go in the stack. The heap is used for variables whose lifetime we don’t really know up front but we expect them to last a while. In most languages it’s critical that we know at compile time how large a variable is if we want to store it on the stack.
Objects (which vary in size as we update them) go on the heap because we don’t know at creation time how long they are going to last. In many languages the heap is garbage collected to find objects (such as the cls1 object) that no longer have any references.
In Java, most objects go directly into the heap. In languages like C / C++, structs and classes can often remain on the stack when you’re not dealing with pointers.
The Stack
When you call a function the arguments to that function plus some other overhead is put on the stack. Some info (such as where to go on return) is also stored there. When you declare a variable inside your function, that variable is also allocated on the stack.
Deallocating the stack is pretty simple because you always deallocate in the reverse order in which you allocate. Stack stuff is added as you enter functions, the corresponding data is removed as you exit them. This means that you tend to stay within a small region of the stack unless you call lots of functions that call lots of other functions (or create a recursive solution).
The Heap
The heap is a generic name for where you put the data that you create on the fly. If you don’t know how many spaceships your program is going to create, you are likely to use the new (or malloc or equivalent) operator to create each spaceship. This allocation is going to stick around for a while, so it is likely we will free things in a different order than we created them.
Thus, the heap is far more complex, because there end up being regions of memory that are unused interleaved with chunks that are – memory gets fragmented. Finding free memory of the size you need is a difficult problem. This is why the heap should be avoided (though it is still often used).
Implementation
Implementation of both the stack and heap is usually down to the runtime / OS. Often games and other applications that are performance critical create their own memory solutions that grab a large chunk of memory from the heap and then dish it out internally to avoid relying on the OS for memory.
This is only practical if your memory usage is quite different from the norm – i.e for games where you load a level in one huge operation and can chuck the whole lot away in another huge operation.
Physical location in memory
This is less relevant than you think because of a technology called Virtual Memory, which makes your program think that you have access to a certain address where the physical data is somewhere else (even on the hard disc!). The addresses you get for the stack are in increasing order as your call tree gets deeper. The addresses for the heap are unpredictable (i.e. implementation specific) and frankly not important.
In Short
A stack is used for static memory allocation and a heap for dynamic memory allocation, both stored in the computer’s RAM.
In Detail
The Stack
The stack is a “LIFO” (last in, first out) data structure, that is managed and optimized by the CPU quite closely. Every time a function declares a new variable, it is “pushed” onto the stack. Then every time a function exits, all of the variables pushed onto the stack by that function, are freed (that is to say, they are deleted). Once a stack variable is freed, that region of memory becomes available for other stack variables.
The advantage of using the stack to store variables, is that memory is managed for you. You don’t have to allocate memory by hand, or free it once you don’t need it any more. What’s more, because the CPU organizes stack memory so efficiently, reading from and writing to stack variables is very fast.
The Heap
The heap is a region of your computer’s memory that is not managed automatically for you, and is not as tightly managed by the CPU. It is a more free-floating region of memory (and is larger). To allocate memory on the heap, you must use malloc() or calloc(), which are built-in C functions. Once you have allocated memory on the heap, you are responsible for using free() to deallocate that memory once you don’t need it any more.
If you fail to do this, your program will have what is known as a memory leak. That is, memory on the heap will still be set aside (and won’t be available to other processes). As we will see in the debugging section, there is a tool called Valgrind that can help you detect memory leaks.
Unlike the stack, the heap does not have size restrictions on variable size (apart from the obvious physical limitations of your computer). Heap memory is slightly slower to be read from and written to, because one has to use pointers to access memory on the heap. We will talk about pointers shortly.
Unlike the stack, variables created on the heap are accessible by any function, anywhere in your program. Heap variables are essentially global in scope.
Variables allocated on the stack are stored directly to the memory and access to this memory is very fast, and its allocation is dealt with when the program is compiled. When a function or a method calls another function which in turns calls another function, etc., the execution of all those functions remains suspended until the very last function returns its value. The stack is always reserved in a LIFO order, the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack, freeing a block from the stack is nothing more than adjusting one pointer.
Variables allocated on the heap have their memory allocated at run time and accessing this memory is a bit slower, but the heap size is only limited by the size of virtual memory. Elements of the heap have no dependencies with each other and can always be accessed randomly at any time. You can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time.
You can use the stack if you know exactly how much data you need to allocate before compile time, and it is not too big. You can use the heap if you don’t know exactly how much data you will need at runtime or if you need to allocate a lot of data.
In a multi-threaded situation each thread will have its own completely independent stack, but they will share the heap. The stack is thread specific and the heap is application specific. The stack is important to consider in exception handling and thread executions.
At run time, if the application needs more heap, it can allocate memory from free memory; if the stack needs memory, it can use free memory already allocated for the application.
The size of the stack is set by the OS when a thread is created. The size of the heap is set on application startup, but it can grow as space is needed (the allocator requests more memory from the operating system).
What makes one faster?
Stack allocation is much faster since all it really does is move the stack pointer. Using memory pools, you can get comparable performance out of heap allocation, but that comes with a slight added complexity and its own headaches.
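As a rough illustration of the memory-pool idea, a tiny object pool sketch (not a production allocator):
// Reuse previously allocated objects instead of going to the allocator
// (and, in managed languages, the garbage collector) for every request.
class ObjectPool<T> where T : new()
{
    private readonly Stack<T> free = new Stack<T>();

    public T Rent() => free.Count > 0 ? free.Pop() : new T();

    public void Return(T item) => free.Push(item);
}
// Usage:
// var pool = new ObjectPool<StringBuilder>();
// var sb = pool.Rent();
// ... use sb ...
// sb.Clear();
// pool.Return(sb);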
Also, stack vs. heap is not only a performance consideration; it also tells you a lot about the expected lifetime of objects.
How about implementing something like SO does with the CAPTCHAs?
If you’re using the site normally, you’ll probably never see one. If someone reloads the same page too often, posts successive comments too quickly, or does something else that triggers an alarm, make them prove they’re human. In your case, this would probably be constant reloads of the same page, following every link on a page quickly, or filling in an order form too fast to be human.
If they fail the check x times in a row (say, 2 or 3), give that IP a timeout or other such measure. Then at the end of the timeout, dump them back to the check again.
Since you have unregistered users accessing the site, you do have only IPs to go on. You can issue sessions to each browser and track that way if you wish. And, of course, throw up a human-check if too many sessions are being (re-)created in succession (in case a bot keeps deleting the cookie).
As far as catching too many innocents, you can put up a disclaimer on the human-check page: “This page may also appear if too many anonymous users are viewing our site from the same location. We encourage you to register or login to avoid this.” (Adjust the wording appropriately.)
Besides, what are the odds that X people are loading the same page(s) at the same time from one IP? If they’re high, maybe you need a different trigger mechanism for your bot alarm.
Edit: Another option is if they fail too many times, and you’re confident about the product’s demand, to block them and make them personally CALL you to remove the block.
Having people call does seem like an asinine measure, but it makes sure there’s a human somewhere behind the computer. The key is to have the block only be in place for a condition which should almost never happen unless it’s a bot (e.g. fail the check multiple times in a row). Then it FORCES human interaction – to pick up the phone.
In response to the comment of having them call me, there’s obviously that tradeoff here. Are you worried enough about ensuring your users are human to accept a couple phone calls when they go on sale? If I were so concerned about a product getting to human users, I’d have to make this decision, perhaps sacrificing a (small) bit of my time in the process.
Since it seems like you’re determined to not let bots get the upper hand/slam your site, I believe the phone may be a good option. Since I don’t make a profit off your product, I have no interest in receiving these calls. Were you to share some of that profit, however, I may become interested. As this is your product, you have to decide how much you care and implement accordingly.
The other ways of releasing the block just aren’t as effective: a timeout (but they’d get to slam your site again after, rinse-repeat), a long timeout (if it was really a human trying to buy your product, they’d be SOL and punished for failing the check), email (easily done by bots), fax (same), or snail mail (takes too long).
You could, of course, instead have the timeout period increase per IP for each time they get a timeout. Just make sure you’re not punishing true humans inadvertently.
Is Assembly faster than C++?
The unsatisfying answer: Nearly every C++ compiler can output assembly language, so assembly language can be exactly the same speed as C++ if you use C++ to develop the assembly code.
The more interesting answer: It’s highly unlikely that an application written entirely in assembly language remains faster than the same application written in C++ over the long run, even in the unlikely case it starts out faster.
Repeat after me: Assembly Language Isn’t Magic™.
For the nitty gritty details, I’ll just point you to some previous answers I’ve written, as well as some related questions, and at the end, an excellent answer from Christopher Clark:
Performance optimization strategies as a last resort
Let’s assume:
the code already is working correctly
the algorithms chosen are already optimal for the circumstances of the problem
the code has been measured, and the offending routines have been isolated
all attempts to optimize will also be measured to ensure they do not make matters worse
OK, you’re defining the problem to where it would seem there is not much room for improvement. That is fairly rare, in my experience. I tried to explain this in a Dr. Dobbs article in November 1993, by starting from a conventionally well-designed non-trivial program with no obvious waste and taking it through a series of optimizations until its wall-clock time was reduced from 48 seconds to 1.1 seconds, and the source code size was reduced by a factor of 4. My diagnostic tool was this. The sequence of changes was this:
The first problem found was use of list clusters (now called “iterators” and “container classes”) accounting for over half the time. Those were replaced with fairly simple code, bringing the time down to 20 seconds.
Now the largest time-taker is more list-building. As a percentage, it was not so big before, but now it is because the bigger problem was removed. I find a way to speed it up, and the time drops to 17 seconds.
Now it is harder to find obvious culprits, but there are a few smaller ones that I can do something about, and the time drops to 13 sec.
Now I seem to have hit a wall. The samples are telling me exactly what it is doing, but I can’t seem to find anything that I can improve. Then I reflect on the basic design of the program, on its transaction-driven structure, and ask if all the list-searching that it is doing is actually mandated by the requirements of the problem.
Then I hit upon a re-design, where the program code is actually generated (via preprocessor macros) from a smaller set of source, and in which the program is not constantly figuring out things that the programmer knows are fairly predictable. In other words, don’t “interpret” the sequence of things to do, “compile” it.
That redesign is done, shrinking the source code by a factor of 4, and the time is reduced to 10 seconds.
Now, because it’s getting so quick, it’s hard to sample, so I give it 10 times as much work to do, but the following times are based on the original workload.
More diagnosis reveals that it is spending time in queue-management. In-lining these reduces the time to 7 seconds.
Now a big time-taker is the diagnostic printing I had been doing. Flush that – 4 seconds.
Now the biggest time-takers are calls to malloc and free. Recycle objects – 2.6 seconds.
Continuing to sample, I still find operations that are not strictly necessary – 1.1 seconds.
Total speedup factor: 43.6
Now no two programs are alike, but in non-toy software I’ve always seen a progression like this. First you get the easy stuff, and then the more difficult, until you get to a point of diminishing returns. Then the insight you gain may well lead to a redesign, starting a new round of speedups, until you again hit diminishing returns. Now this is the point at which it might make sense to wonder whether ++i or i++ or for(;;) or while(1) are faster: the kinds of questions I see so often on Stack Overflow.
P.S. It may be wondered why I didn’t use a profiler. The answer is that almost every one of these “problems” was a function call site, which stack samples pinpoint. Profilers, even today, are just barely coming around to the idea that statements and call instructions are more important to locate, and easier to fix, than whole functions.
I actually built a profiler to do this, but for a real down-and-dirty intimacy with what the code is doing, there’s no substitute for getting your fingers right in it. It is not an issue that the number of samples is small, because none of the problems being found are so tiny that they are easily missed.
ADDED: jerryjvl requested some examples. Here is the first problem. It consists of a small number of separate lines of code, together taking over half the time:
/* IF ALL TASKS DONE, SEND ITC_ACKOP, AND DELETE OP */
if (ptop->current_task >= ILST_LENGTH(ptop->tasklist)){
. . .
/* FOR EACH OPERATION REQUEST */
for ( ptop = ILST_FIRST(oplist); ptop != NULL; ptop = ILST_NEXT(oplist, ptop)){
. . .
/* GET CURRENT TASK */
ptask = ILST_NTH(ptop->tasklist, ptop->current_task);
These were using the list cluster ILST (similar to a list class). They are implemented in the usual way, with “information hiding” meaning that the users of the class were not supposed to have to care how they were implemented. When these lines were written (out of roughly 800 lines of code) thought was not given to the idea that these could be a “bottleneck” (I hate that word). They are simply the recommended way to do things. It is easy to say in hindsight that these should have been avoided, but in my experience all performance problems are like that. In general, it is good to try to avoid creating performance problems. It is even better to find and fix the ones that are created, even though they “should have been avoided” (in hindsight). I hope that gives a bit of the flavor.
Here is the second problem, in two separate lines:
/* ADD TASK TO TASK LIST */
ILST_APPEND(ptop->tasklist, ptask);
. . .
/* ADD TRANSACTION TO TRANSACTION QUEUE */
ILST_APPEND(trnque, ptrn);
These are building lists by appending items to their ends. (The fix was to collect the items in arrays, and build the lists all at once.) The interesting thing is that these statements only cost (i.e. were on the call stack) 3/48 of the original time, so they were not in fact a big problem at the beginning. However, after removing the first problem, they cost 3/20 of the time and so were now a “bigger fish”. In general, that’s how it goes.
I might add that this project was distilled from a real project I helped on. In that project, the performance problems were far more dramatic (as were the speedups), such as calling a database-access routine within an inner loop to see if a task was finished.
REFERENCE ADDED: The source code, both original and redesigned, can be found in www.ddj.com, for 1993, in file 9311.zip, files slug.asc and slug.zip.
EDIT 2011/11/26: There is now a SourceForge project containing source code in Visual C++ and a blow-by-blow description of how it was tuned. It only goes through the first half of the scenario described above, and it doesn’t follow exactly the same sequence, but still gets a 2-3 order of magnitude speedup.
Suggestions:
Pre-compute rather than re-calculate: for any loop or repeated call that contains a calculation with a relatively limited range of inputs, consider building a lookup (array or dictionary) that holds the result of that calculation for every value in the valid range of inputs, then use a simple lookup inside the algorithm instead (see the sketch after this list). Down-sides: if few of the pre-computed values are actually used this may make matters worse, and the lookup may take significant memory.
Don’t use library methods: most libraries need to be written to operate correctly under a broad range of scenarios, and perform null checks on parameters, etc. By re-implementing a method you may be able to strip out a lot of logic that does not apply in the exact circumstance you are using it. Down-sides: writing additional code means more surface area for bugs.
Do use library methods: to contradict myself, language libraries get written by people that are a lot smarter than you or me; odds are they did it better and faster. Do not implement it yourself unless you can actually make it faster (i.e.: always measure!)
Cheat: in some cases, although an exact calculation may exist for your problem, you may not need ‘exact’; sometimes an approximation is ‘good enough’ and a lot faster into the bargain. Ask yourself, does it really matter if the answer is off by 1%? 5%? Even 10%? Down-sides: well… the answer won’t be exact.
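Here is a rough illustration of the pre-compute idea mentioned above; the function names are invented for the example, and whether it pays off depends on how often the values are reused:
#include <array>
#include <cmath>
// A hypothetical hot path that repeatedly needs sin() of a whole-degree angle.
// Compute all 360 possible results once, then replace the call with an index.
const std::array<double, 360>& sineTable() {
    static const std::array<double, 360> table = [] {
        std::array<double, 360> t{};
        for (int deg = 0; deg < 360; ++deg)
            t[deg] = std::sin(deg * 3.14159265358979323846 / 180.0);
        return t;
    }();
    return table;
}
double fastSinDegrees(int deg) { return sineTable()[deg]; }  // assumes 0 <= deg < 360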
When you can’t improve the performance any more – see if you can improve the perceived performance instead.
You may not be able to make your fooCalc algorithm faster, but often there are ways to make your application seem more responsive to the user.
A few examples:
anticipating what the user is going to request and starting to work on it before they ask
displaying results as they come in, instead of all at once at the end
an accurate progress meter
These won’t make your program faster, but they might make your users happier with the speed you have.
I spend most of my life in just this place. The broad strokes are to run your profiler and get it to record:
Cache misses. Data cache is the #1 source of stalls in most programs. Improve cache hit rate by reorganizing offending data structures to have better locality; pack structures and numerical types down to eliminate wasted bytes (and therefore wasted cache fetches); prefetch data wherever possible to reduce stalls.
Load-hit-stores. Compiler assumptions about pointer aliasing, and cases where data is moved between disconnected register sets via memory, can cause a certain pathological behavior that causes the entire CPU pipeline to clear on a load op. Find places where floats, vectors, and ints are being cast to one another and eliminate them. Use __restrict liberally to promise the compiler about aliasing.
Microcoded operations. Most processors have some operations that cannot be pipelined, but instead run a tiny subroutine stored in ROM. Examples on the PowerPC are integer multiply, divide, and shift-by-variable-amount. The problem is that the entire pipeline stops dead while this operation is executing. Try to eliminate use of these operations or at least break them down into their constituent pipelined ops so you can get the benefit of superscalar dispatch on whatever the rest of your program is doing.
Branch mispredicts. These too empty the pipeline. Find cases where the CPU is spending a lot of time refilling the pipe after a branch, and use branch hinting if available to get it to predict correctly more often. Or better yet, replace branches with conditional-moves wherever possible, especially after floating point operations because their pipe is usually deeper and reading the condition flags after fcmp can cause a stall.
Sequential floating-point ops. Make these SIMD.
And one more thing I like to do:
Set your compiler to output assembly listings and look at what it emits for the hotspot functions in your code. All those clever optimizations that “a good compiler should be able to do for you automatically”? Chances are your actual compiler doesn’t do them. I’ve seen GCC emit truly WTF code.
More suggestions:
Avoid I/O: Any I/O (disk, network, ports, etc.) is always going to be far slower than any code that is performing calculations, so get rid of any I/O that you do not strictly need.
Move I/O up-front: Load up all the data you are going to need for a calculation up-front, so that you do not have repeated I/O waits within the core of a critical algorithm (and maybe as a result repeated disk seeks, when loading all the data in one hit may avoid seeking).
Delay I/O: Do not write out your results until the calculation is over, store them in a data structure and then dump that out in one go at the end when the hard work is done.
Threaded I/O: For those daring enough, combine ‘I/O up-front’ or ‘Delay I/O’ with the actual calculation by moving the loading into a parallel thread, so that while you are loading more data you can work on a calculation on the data you already have, or while you calculate the next batch you can simultaneously write out the results from the last batch (a sketch follows below).
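As a hedged sketch of that last suggestion, with loadBatch() and process() standing in for real I/O and real computation:
#include <future>
#include <numeric>
#include <vector>
// Stand-ins: loadBatch() represents slow I/O, process() the actual calculation.
std::vector<int> loadBatch(int index) { return std::vector<int>(1000, index); }
long long process(const std::vector<int>& batch) { return std::accumulate(batch.begin(), batch.end(), 0LL); }
// While batch i is being processed, batch i+1 is already loading on another thread.
long long pipeline(int batchCount) {
    long long total = 0;
    auto pending = std::async(std::launch::async, loadBatch, 0);
    for (int i = 0; i < batchCount; ++i) {
        std::vector<int> current = pending.get();
        if (i + 1 < batchCount)
            pending = std::async(std::launch::async, loadBatch, i + 1);
        total += process(current);
    }
    return total;
}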
Graph algorithms, the Bellman-Ford algorithm in particular.
Scheduling algorithms, the round-robin scheduling algorithm in particular.
Dynamic programming algorithms, the 0/1 knapsack algorithm in particular.
Backtracking algorithms, the 8-queens algorithm in particular.
Greedy algorithms, the fractional knapsack algorithm in particular.
We use all these algorithms in our daily life in various forms at various places.
For example, every shopkeeper applies one or more of several scheduling algorithms to serve his customers, depending on his service policy and the situation. No single scheduling algorithm fits every situation.
All of us mentally apply one of the graph algorithms when we plan the shortest route to be taken when we go out for doing multiple things in one trip.
All of us apply one of the Greedy algorithms while selecting career, job, girlfriends, friends etc.
All of us apply one of the dynamic programming algorithms when we do simple multiplication mentally by referring to the multiplication tables stored in our memory.
Python’s built-in sort uses Timsort, a sorting algorithm invented by Tim Peters that is now used in other languages such as Java.
Timsort is a complex algorithm which uses the best of many other algorithms, and has the advantage of being stable – in other words, if two elements A and B are in the order A then B before the sort and those elements compare equal during the sort, the algorithm guarantees that the result will preserve that A-then-B ordering.
That means, for example, that if you want to order a set of student scores by score and then by name (so that equal scores are already ordered alphabetically), you can sort by name first and then sort by score.
TimSort has good performance against data sets which are partially sorted or already sorted (areas where some other algorithms struggle).
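Python’s sort exposes that stability guarantee directly; the same two-pass trick can be sketched in C++ with std::stable_sort (the Student type here is invented for illustration):
#include <algorithm>
#include <string>
#include <vector>
struct Student { std::string name; int score; };
void orderByScoreThenName(std::vector<Student>& s) {
    // Sort by the secondary key first...
    std::stable_sort(s.begin(), s.end(),
                     [](const Student& a, const Student& b) { return a.name < b.name; });
    // ...then stable-sort by the primary key: equal scores keep their alphabetical order.
    std::stable_sort(s.begin(), s.end(),
                     [](const Student& a, const Student& b) { return a.score < b.score; });
}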
Timsort – Wikipedia
Timsort was designed to take advantage of runs of consecutive ordered elements that already exist in most real-world data (natural runs). It iterates over the data collecting elements into runs and simultaneously putting those runs in a stack. Whenever the runs on the top of the stack match a merge criterion, they are merged. This goes on until all data is traversed; then, all runs are merged two at a time and only one sorted run remains.
https://en.m.wikipedia.org/wiki/Timsort
I’m currently coding a SAT solver algorithm that will have to take millions of input data, and I was wondering if I should switch from Python to C.
Answer: Using best-of-class equivalent algorithms, optimized compiled C code is often multiple orders of magnitude faster than Python code interpreted by CPython (the main Python implementation). Other Python implementations (like PyPy) might do a bit better, but not vastly so. Some computations fit Python better, but I have a feeling that a SAT solver implementation will not be competitive if written in Python.
All that said, do you need to write a new implementation? Could you use one of the excellent ones out there? CDCL implementations often do a good job, and there are various open-source ones readily available (e.g., this one: https://github.com/togatoga/togasat).
Comments:
1- I mean, it also depends. I recall seeing an analysis some time ago showing that CPython can be as fast as C… provided you are almost exclusively using library functions written in C. That being said, for any non-trivial Python program you will probably spend quite a bit of time in the interpreter, and not in C library functions.
There are two main reasons: performance and familiarity. While Rust has been shown to be faster than C++, it’s not as fast as assembly language—and many developers have been working in assembly for so long that they’re not willing to give it up.
However, there’s another reason why some developers are sticking with C++: compiler optimization.
C++ compilers are more intelligent than Rust compilers when it comes to optimizing code for performance, so if you’re looking for top-notch performance from your application, then you might want to stick with C++ until the Rust compiler has caught up.
The C++ programming language definition is written in English and in other human languages. Programming language definitions are written for humans to read. They are not written in programming languages.
An actual implementation of a C++ compiler (or interpreter) can be written in any general-purpose programming language. Some are written in C, some are written in C++, some are written in other programming languages. Some are written with the help of compiler development tools and infrastructure (e.g., lex, yacc, flex, bison, antlr, LLVM, etc.). It just depends on the specific C++ implementation you’re looking at.
This is true of all high-level programming languages. Any general-purpose programming language can be used to implement a compiler or interpreter, no matter what programming language you are compiling or interpreting.
Learn other languages. It will broaden your perspective and hopefully make you a better developer.
Alan Perlis, one of the developers of ALGOL, once said, “A language that doesn’t affect the way you think about programming, is not worth knowing.”
Conversely, that implies learning other languages can and will affect the way you think about programming, provided you get some variety of exposure.
C++ is a multiparadigm language. But if you haven’t had exposure to those paradigms in a more focused setting, you might not understand the value they bring, or their strengths, weaknesses, idioms, and insights.
So even if you do the bulk of your programming in C++, you may not be using it the most effective way possible.
I know I personally have gaps, because I haven’t explored certain paradigms myself. I owe it to myself to at least dip my toe in some of them. I know this, because every time I learn a new language or environment, I sense a gap closing—a gap I may not have been aware of previously.
You don’t even need to spend a lot of time to gain value, either. I may have only spent a week with Scala, for example, but I learned more than just the base language from it. I hadn’t really encountered fold and match expressions as such basic and integral concepts, for example.
And despite its negative reputation, I found Perl to be an excellent language to learn about multiple programming techniques.
Mark Jason Dominus’ Higher Order Perl opened my eyes to a number of techniques that I believe originated in the LISP world.
Example: Partial Function Application
In Perl, you can implement partial function application (sometimes conflated with the related concept of currying) with your eyes closed and one hand behind your back. Suppose I want to bind the first argument of foo():
my $f = sub { return foo($arg1, @_); };
Now I can invoke $f as a function with that first argument bound, with a slight syntax tweak: &$f(…) or $f->(…). I don’t even need to think about the rest.
Trying to learn about that for the first time in C++ likely would have lost the forest for the trees.
C++98 was quite primitive. It offered std::bind1st and std::bind2nd for 2-argument function objects only. Boost offered boost::bind,[1] which had its own limitations. And because these were relatively uncommon, they were unfamiliar to many C++ users (at least among the crowd I was in). C++ lambdas (introduced with C++11) help, but they don’t work for arbitrary arguments until C++14. For that, you probably need parameter packs, forwarding references, and std::forward.[2] And then there are object lifetimes to consider, so for your bound arguments you might need to trade off between copy, move, capturing a reference, smart pointers,[3][4] etc. Oh, and finally, it won’t yield a function pointer, but rather a function object, so it’s not usable in places that need a pure function pointer. Although, if it manages to be capture-less, it can provide a pure function pointer by applying unary + to it…
Can you see how you might lose the forest for the trees here?
If you didn’t already have some idea of the usefulness of partial application, would you even try? If you hadn’t encountered the concept before, would it have even come to mind when you saw lambdas?
Punchline
In practice, if you’re already well versed in C++, it’s not actually all that difficult to implement techniques like partial application in C++. You’re already accustomed to the rigamarole described above, since C++ confronts you with those sorts of decisions regularly.
It does cloud things noticeably, however. Learning the concepts in a simpler environment separates you from the implementation noise.
Learn other languages and become a more rounded and hopefully better developer. Step away from C++’s innumerable trees of details to see different areas of the forest more clearly.
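For the curious, here is a minimal modern-C++ sketch of the same first-argument binding described above (foo() is just a stand-in, and this uses a C++14 generic lambda):
#include <utility>
int foo(int a, int b, int c) { return a * 100 + b * 10 + c; }
// Bind foo()'s first argument; forward whatever is supplied later.
auto bind_first(int arg1) {
    return [arg1](auto&&... rest) {
        return foo(arg1, std::forward<decltype(rest)>(rest)...);
    };
}
int main() {
    auto f = bind_first(7);           // f behaves like foo with its first argument fixed to 7
    return f(8, 9) == 789 ? 0 : 1;
}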
In C++, how can a template object be deleted with or without the delete keyword? (template <class T> class Obj;)
If you allocated it with new, then delete it by passing the pointer to delete, just like any other pointer. There’s nothing particularly special about a pointer to an object whose type happens to be a template.
Most of the time, though, you shouldn’t be calling new and delete directly.
Is there a way to prevent objects in a class to collect garbage in Java? If no, why?
Can you prevent objects being garbage collected? Yes. Retain a reference to them for the lifetime of the program. That will defer garbage collection until the program ends.
Is there a way to remove garbage collection? No.
Why?
Language design choice to simplify memory management
Typically garbage collection happens without issues
If your app struggles with garbage collection, that may point to a design revision being necessary, or maybe Java is not the right fit. I’ve not experienced that to date though.
Garbage collection is very mainstream now, being used in JavaScript, Typescript, Python, Kotlin, C#, Go, Swift, Lisp, Smalltalk, Clojure, Haskell. This is why I do not understand the issues with noobs and GC: it is bloody everywhere. In all your favourite “best languages”.
The only languages I know without garbage collection are C, C++ and Rust. Oh and Pascal, but that’s not mainstream at present.
So if your app is truly “the one” that cannot be solved with GC, then you’re probably learning C++, Rust or C. Rust is the most modern of these and the one I would recommend. I would probably use C++ myself, as I have some background in it. By Alan Mellor
When should “new” be used in C++?
new’s use should be confined to very narrow use-cases. Examples of use cases where new is ok:
Writing low-level memory management code such as allocators and deallocators, smart pointers, etc.
Working with code/libraries that use outdated C++ programming idioms, like Qt — but then only to the extent necessary to work with Qt
You need to preallocate an object to pass to an API that indicates it will assume ownership of it (i.e., responsibility for deleting the object). If you are going to work with that object at all before passing it off, you should not use new directly (use a std::unique_ptr and call .release() when calling the API).
The way to dynamically allocate memory correctly in modern C++ is std::make_unique or std::make_shared. The first returns a std::unique_ptr to the allocated object (which will delete the object for you when it goes out of scope); the second returns a std::shared_ptr, which can be copied around — the object is deleted for you when there are no more copies of the shared pointer.
For most programming work, you don’t need and shouldn’t use “new” or even worse “malloc”.
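A minimal sketch of that modern style (Widget is a made-up type for illustration):
#include <memory>
struct Widget { int id = 0; };
void modernAllocation() {
    auto owned  = std::make_unique<Widget>();   // sole owner; freed when 'owned' goes out of scope
    auto shared = std::make_shared<Widget>();   // reference counted
    auto alias  = shared;                       // copying a shared_ptr bumps the count
}                                               // no delete anywhere: both Widgets are released here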
Originally Answered: Why do array indices start with 0 (zero) in many programming languages?
Array indices should start at 0. This is not just an efficiency hack for ancient computers, or a reflection of the underlying memory model, or some other kind of historical accident—forget all of that. Zero-based indexing actually simplifies array-related math for the programmer, and simpler math leads to fewer bugs. Here are some examples.
Suppose you’re writing a hash table that maps each integer key to one of n buckets. If your array of buckets is indexed starting at 0, you can write bucket = key mod n; but if it’s indexed starting at 1, you have to write bucket = (key mod n) + 1.
Suppose you’re writing code to serialize a rectangular array of pixels, with width w and height h, to a file (which we’ll think of as a one-dimensional array of length w*h). With 0-indexed arrays, pixel (x, y) goes into position y*w + x; with 1-indexed arrays, pixel (x, y) goes into position y*w + x - w.
Suppose you want to put the letters ‘A’ through ‘Z’ into an array of length 26, and you have a function ord that maps a character to its ASCII value. With 0-indexed arrays, the character c is put at index ord(c) - ord(‘A’); with 1-indexed arrays, it’s put at index ord(c) - ord(‘A’) + 1.
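Written out as code (C++ here, but the arithmetic is the same in any 0-indexed language), the three examples above stay free of off-by-one corrections:
unsigned bucket(unsigned key, unsigned n)               { return key % n; }
unsigned pixelIndex(unsigned x, unsigned y, unsigned w) { return y * w + x; }
int letterSlot(char c)                                  { return c - 'A'; }   // 'A'..'Z' map to 0..25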
It’s in fact one-based indexing that’s the historical accident—human languages needed numbers for “first”, “second”, etc. before we had invented zero. For a practical example of the kinds of problems this accident leads to, consider how the 1800s—well, no, actually, the period from January 1, 1801 through December 31, 1900—came to be known as the “19th century”.
Originally Answered: Knowing that Python is very slow compared to Java and C++, why do they mostly use Python for fast algorithmic procedures like machine learning?
No, almost no one uses Python libraries for machine learning.
Before you start listing counterexamples, notice the emphasized words. Yes, a lot of people use Python for machine learning, because it allows for very fast prototyping and overall exploration of problem space, but none of the libraries they are using for it are actually written in Python. Indeed, they are almost always written in either Fortran or C++ instead, and just interface with Python through some thin wrapper.
The slowness of Python is completely irrelevant if the only thing you do with it is invoking a library function written in highly-optimized C++.
Many companies have bet their stack on Java, so there’s demand for Java programmers.
The JVM is cross-platform, and uses run-time information to manage itself.
It takes care of memory management.
Java 8 has lambda expressions, and includes an implementation of JavaScript called Nashorn that runs on the JVM.
Static typing: Java is typesafe, and its static typing is essentially a form of self-documenting code.
Java is mature: It’s been around for 20 years, it’s fully backward compatible, and code written decades ago still works.
Android: Java 7 works on the world’s largest mobile OS.
For those and other reasons, Java is one of the world’s most widely used languages. Oracle says there are 10 million Java programmers worldwide. The Github stats from Eduardo Bonet speak volumes.
What important Java programming questions are asked during interviews?
What have you built using Java?
How did you design that thing? What were the key principles you followed?
How did you test that thing?
Java is an object oriented language. What design principles have you found helpful?
What does clean code mean to you?
How do you add new features while keeping existing code working in the CI pipeline?
These are the kinds of things I’m interested in. Many of them get covered as we work together on a simple programming kata.
It all boils down to ‘can you use Java and its tools to work alongside us on our team’. By Alan Mellor
If you understand loops, variables and conditionals only, that’s enough to hack out a FizzBuzz. If you’re a bit further along the path, you can write a cleaner FizzBuzz.
The challenge itself is about writing fizz and buzz when a number is exactly divisible by 3 or 5. It’s not really important, except that it steers you to use those elements of programming above.
It can be done in any language as those concepts are foundational to every language.
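For reference, the usual formulation (Fizz for multiples of 3, Buzz for multiples of 5, FizzBuzz for both) looks something like this in C++:
#include <iostream>
#include <string>
int main() {
    for (int i = 1; i <= 100; ++i) {
        std::string out;
        if (i % 3 == 0) out += "Fizz";
        if (i % 5 == 0) out += "Buzz";
        std::cout << (out.empty() ? std::to_string(i) : out) << '\n';
    }
}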
Let me open with a quote that you’ve probably seen many times:
premature optimization is the root of all evil. — Donald Knuth
Programs are regularly gigantic. If you profile a program that isn’t fast enough, you’ll often find histograms that show the top 1,000 functions all taking well under 0.1% of the execution time. “Optimizing” those 1,000 functions is usually not practical and would likely not achieve the desired speedup anyway.
The number of executed instructions is often relatively irrelevant. Instead, the number of cache misses is far more critical, but it’s also much harder to locate them. Avoiding cache misses is something that may require design work up front, because it affects core data structures.
Machines are highly heterogeneous, and extracting performance is not just a matter of dealing with the main CPU cores (which may not be homogeneous!), but also to arrange for efficient use of vector units, and accelerators like GPUs, media co-processors, and neural engines. Utilizing all those units is also something that may require design work up front.
Performance is not just a matter of execution time. It’s also a matter of energy consumption and scalability. And response time: More than ever, software is interactive, and yet has to deal with new kinds of latencies (e.g., from networking).
Software is an independent industry: If your version 1.0 is too slow, or uses too much battery life, or chokes your data center, you might not get a chance at developing an optimized version 1.5. (In 1974, software was mostly an add-on to hardware.)
Software is built from independent components: While developing a specific component, you might not know just how hard it will be pushed. If you don’t design for performance from the start you may end up painting yourself into a corner.
All that to say that Knuth’s quote should be taken for what it is: don’t optimize local instruction counts early on. But don’t skip thinking about optimizing design and data structures from the start, because if performance matters in any way (throughput, latency, energy use, or scalability) it’s something that’s difficult or impossible to “retrofit”. Things to think about:
How will you evaluate performance? How will you track it during the development and maintenance process?
How can you avoid computation that’s not needed? This might mean to architect for “lazy evaluation”.
How will you lay out your data for efficient access (i.e., make best use of the memory hierarchy)?
How will you organize your algorithms and data structures to take advantage of the available computational resources?
When considering algorithms, what regime will they work in? A traditional example: “Fast” sorting algorithms are typically only preferable once there are enough elements (often 50+) to sort; if you know that you’ll be repeatedly sorting a dozen elements, those algorithms may not be your best option.
Are the complexities introduced to achieve better performance worth their overall (negative?) impact on the project?
When all that is handled adequately, you might eventually have to deal with “nitty gritty code optimization”, and it will have a chance to be meaningful.
Now, regarding the original question:
What do most programmers do (when optimizing code) that is essentially wrong?
I don’t think that’s generalizable. I think Knuth’s quote is often mis-construed… but I wouldn’t say that “most programmers” do that. I’m not even sure that “most programmers” optimize code at all. I also think that Knuth’s quote is often ignored, and that’s not great either… but again, I’d venture that it doesn’t involve “most programmers”. Programmers are a very diverse bunch, with many diverse roles, working on a great diversity of projects that may or may not have concrete performance constraints.
In other words, I think the question has no meaningful answer.
Finally, I’d like to close with a quote from the late Len Lattanzi (whom I had the pleasure of having as a colleague for a few years):
Belated pessimization is the leaf of no good. — Len Lattanzi
Umm, that’s really up to you. But there are some tradeoffs.
Java:
Pros:
Extremely widely used. You’ll never want for a job if you are good at it. Other languages (Scala, Kotlin, Groovy) run on the JVM as well. There is a lot of cool big data processing that you can use Java for (Apache Spark, Hadoop, etc.).
Cons:
Tons of bloatware (WebSphere, WebLogic, Adobe Experience Manager) runs on Java. You’re likely to end up coding up some legacy enterprise garbage. UIs written in Java are crap at best.
.NET:
Pros:
Well supported by Microsoft. Visual Studio is gorgeous.
Cons:
Not so many open source libraries, you’ll likely be coding for Windows. This means that your development machine will be Windows (dealbreaker for me). Also, no cool little startup will use .NET ever. Not as many jobs as Java. UIs written in .NET are crap at best.
Node.js
Pros:
Much more concise and faster to develop for than either .NET or Java. Almost as many open source libraries as there are for Java.
Cons:
Memory management, thread management, and overall performance aren’t as good as Java or .NET. You’ll have a harder time finding a Node.js job unless you also know a client side JS framework such as Vue.js or React.js. In that case, you’ll be very much in demand.
Others:
If you want to stick to server-side coding, you should consider Rust and Golang. Both are more performant than any of the above. Benchmarks I’ve read suggest that Rust is overall more performant but that Golang has better concurrency management.
pass in two objects as collaborating parameters so methods can be called on them
The second way is good in OO. You do your calculation once, store the two results as state in an object, use two separate accessors in the calling code.
Why do some programmers not use “using namespace std” in C++?
Yup, what they said. When you say using namespace std, you potentially import hundreds or thousands of symbols into your code. They are symbols with short names like sort, find and get, that you might want to use for your own code. The actual number of symbols imported into your code depends on which headers you include, so your program might work today, and tomorrow when you add a new header, it might break. The list of symbols in namespace std is subject to change (that’s why it’s in a namespace), so your program might work under C++03, and break when you try to compile it under C++11.
You can avoid all these hassles by using fully qualified names; std::cout instead of cout, std::sort() instead of sort(), etc. The more experienced the programmer, the more likely they are to do this.
You can also limit the scope of your namespace pollution by putting your using directive at function scope. By Kurt Guntheroth
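To illustrate both habits, a small sketch:
#include <algorithm>
#include <vector>
void tidy(std::vector<int>& values) {
    std::sort(values.begin(), values.end());   // fully qualified: no risk of colliding with your own sort()
}
void alsoFine(std::vector<int>& values) {
    using namespace std;                       // the directive is confined to this one function
    sort(values.begin(), values.end());
}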
Do all pointers have the same size in C++?
Theoretically, no. Not even for a given system. A char* may have a size different from an int*.
In practice, yes.
First, note that all pointers to object types (as opposed to function types) must be able to round-trip through void* (modulo cv-qualification). So if different object pointer types had different sizes, void* would have to be as large as the largest of them.
Second, for pointers to object types there aren’t many potential advantages to having them be of different size. Why make things complex if they can be made simple at no perceivable cost?
Third… plenty of reasonable code “out there” assumes that all pointers have the same size. So building an implementation where that’s not the case handicaps that implementation right out of the gate.
For function pointers it may actually sometimes be interesting from a performance point of view to give them twice the size of ordinary pointers, because they may have to encapsulate both the address of the function and the address of the associated data segment (in shared library models where a separate data segment is created for every shared library instance). However, because of compatibility considerations even those implementations just add an indirection to keep the function pointers compatible with void* (even though function pointers are not strictly required by the standard to round-trip through void*).
On the scale of bad programming, if is at the bottom of the list.
Compilers are very smart about these things. As an example, consider two ways of writing the same small computation, sketched below.
With optimizations enabled, both compile to the exact same code sequence, which does NOT include a branch:
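(The original example code didn’t survive in this copy; the pair below is an illustrative stand-in for the kind of comparison being made. On typical x86-64 compilers at -O2, both tend to become a single conditional-move sequence.)
// Alternative 1: an explicit if
int clampA(int x, int limit) {
    if (x > limit) x = limit;
    return x;
}
// Alternative 2: the same computation as a conditional expression
int clampB(int x, int limit) {
    return x > limit ? limit : x;
}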
Source code is supposed to be a way to express your intent to the computer. You really should write the source to be as clear as possible and leave the microoptimizations to the compiler. Once you get the program working and correct, then you can look at performance. Use profiling tools to figure out where the time is going and speed up the parts that are slow AND where being slow actually matters.
By the way, you shouldn’t be afraid of branches either. The branch prediction logic in modern processors is nearly telepathic. AMD is using neural nets inside the chip (!). The predictors will correctly guess what is going to happen more than 90% of the time.
The other answers are mistaken. This is a very common confusion. They describe a statically typed language, not a strongly typed language. There is a big difference.
Strongly typed vs weakly typed:
In strongly typed languages you get an error if the types do not match in an expression. It does not matter if the type is determined at compile time (static types) or runtime (dynamic types).
Both Java and Python are strongly typed. In both languages, you get an error if you try to add objects with unmatching types. For example, in Python, you get an error if you try to add a number and a string:
>>> a = 10
>>> b = "hello"
>>> a + b
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: unsupported operand type(s) for +: 'int' and 'str'
In Python, you get this error at runtime. In Java, you would get a similar error at compile time. Most statically typed languages are also strongly typed.
The opposite of strongly typed language is weakly typed. In a weakly typed language, there are implicit type conversions. Instead of giving you an error, it will convert one of the values automatically and produce a result, even if such conversion loses data. This often leads to unexpected and unpredictable behavior.
Javascript is an example of a weakly typed language.
> let a = 10
> let b = "hello"
> a + b
'10hello'
Instead of an error, JavaScript will convert a to string and then concatenate the strings.
Static types vs dynamic types:
In a statically typed language, variables are bound to types and may only hold data of that type. Typically you declare variables and specify the type of data that the variable holds. In some languages, the type can be deduced from what you assign to it, but it still holds that the variable is bound to that type. For example, in Java:
int a = 3;
a = "hello"; // Error, a can only contain integers
In a dynamically typed language, variables may hold any type of data. The type of the data is simply determined by what gets assigned to the variable at runtime. Python is dynamically typed, for example:
a = 10
a = "hello"
# no problem, a first held an integer and then a string
Comments:
#1: Don’t confuse strongly typed with statically typed.
Python is dynamically typed and strongly typed. JavaScript is dynamically typed and weakly typed. Java is statically typed and strongly typed. C is statically typed and weakly typed.
I also added a drawing (not reproduced here) that illustrates how strong and static typing relate to each other as two independent axes:
Python is dynamically typed because types are determined at runtime. The opposite of dynamically typed is statically typed (not strongly typed)
Python is strongly typed because it will give errors when types don’t match instead of performing implicit conversion. The opposite of strongly typed is weakly typed
finalize() is not guaranteed to be called, and the programmer has no control over when or in what order finalizers are called.
They are useless and should be ignored.
A destructor is not part of Java. It is a C++ language feature with very precise definitions of when it will be called.
Comments:
1- Until we got to languages like Rust (with the Drop trait) and a few others, was C++ the only language which had the destructor as a concept? I feel like other languages were inspired by that.
2- Many other languages manage memory for you, even ones predating C: COBOL, FORTRAN and so on. That’s another reason why there isn’t much attention paid to destructors.
Mainly getting out of that procedural ‘function operates on parameters passed in’ mindset.
Tactically, the static can normally be moved onto one of the parameter objects. Or all the parameters become an object that the static moves to. A new object might be needed. Once done the static is now a fully fledged method on an object and is not static anymore.
I view this as a positive iterative step in discovering objects for a system.
For cases where a static makes sense (? none come to mind) then a good practice is to move it closer to where it is used either in the same package or on a class that is strongly related.
I avoid having global ‘Utils’ classes full of statics that are unrelated. That’s fairly basic design, keeping unrelated things separate. In this case, the SOLID ISP principle applies: segregate into smaller, more focused interfaces.
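A before/after sketch of the tactic described above, written here in C++ (Order and TaxPolicy are invented names for illustration):
#include <numeric>
#include <utility>
#include <vector>
struct TaxPolicy { double rate; };
// Before: a procedural-style static helper operating on parameters passed in.
//   static double totalWithTax(const Order& order, const TaxPolicy& policy);
// After: the behaviour moves onto one of the former parameters and stops being static.
class Order {
public:
    explicit Order(std::vector<double> items) : items_(std::move(items)) {}
    double totalWithTax(const TaxPolicy& policy) const {
        double net = std::accumulate(items_.begin(), items_.end(), 0.0);
        return net * (1.0 + policy.rate);
    }
private:
    std::vector<double> items_;
};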
Not really. I use Python occasionally for “quick hacks” – programs that I’ll probably run once and then delete – and also because I use Blender for 3D modeling and Python is its scripting language.
I used to write quite a bit of JavaScript for web programming but since WASM came along and allows me to run C++ at very nearly full speed inside a web browser, I write almost zero JavaScript these days.
I use C++ for almost everything.
Once you get to know C++ it’s no harder than Python – the main thing I find great about Python is the number of easy-to-find libraries.
But in AAA games – the poor performance of Python pretty much rules it out.
In embedded systems, the computer is generally too small to fit a Python interpreter into memory – so C or C++ is a more likely choice.
JavaScript is a scripting language developed by ECMA’s technical committee and Brendan Eich. It works perfectly well in web browsers without the help of any web server or compiler. It allows you to change HTML and CSS in the browser without a full page reload, which is why it is used to create dynamic and interactive web pages.
TypeScript is a superset of the JavaScript language. It was introduced and developed by Microsoft technical fellow Anders Hejlsberg in 2012. TypeScript appeared for a reason: the more JavaScript grew, the heavier and more unreadable JS code became. This became especially evident when developers started to use JavaScript for server-side technologies.
TypeScript is an open-source language that has a compiler that converts TypeScript code to JavaScript code (see the TypeScript playground service). That compiler is cross-browser and also open-source. To start using TypeScript, you can rename your .js files to .ts files, and if there are no logical mistakes in the JS code, you get valid TypeScript code. So, TypeScript code is JavaScript code, just with some additions. To learn more about those additions, watch the original video presentation of TypeScript. Meanwhile, we discuss the key differences between JS and TS in 2022.
I think TypeScript *is* pretty popular, within the constraints it has.
Node.js is 1.8% of websites, and TypeScript is seldom used outside of Node.js. That really means TypeScript has limited potential for use there.
You can use TypeScript on the client-side, but it can be a pain to set up, and unless you have quite a lot of client-side logic, it might not be worth it.
Personally, I think TypeScript on the client-side is well worth the effort, but not really worth it on the server side, where there are so many options outside of a JS runtime.
I don’t think anybody says JavaScript is a dead language. I think its long term future is pretty bleak though, for two reasons:
TypeScript.
WebAssembly.
The entire Internet doesn’t run on JavaScript, in fact hardly any of it does, what you mean is the *web*. The web and the Internet are two different things, and while JavaScript is of course ubiquitous in web sites, practically no Internet infrastructure is using JavaScript.
If you consider the Internet to be the road infrastructure and cars, the web is the screaming babies in the back seats.
Unless you can write really good TypeScript code, you’re probably better off sticking to JavaScript – if you have that option of course.
The main advantage of JS vs TS in an interview is that equivalent code will be much quicker to write in JS, as you don’t have to write type annotations and whatnot. The time you have to spend mechanically writing code is not negligible, and time is of the essence.
Then again, the better you are at TypeScript, the less of a difference this makes. Also, in TypeScript there are more ways to write functionally equivalent code, so when you’re really great at TS you’re more likely to pick the very best way to express what you want to do, so your expertise and good coding style are more evident. Finally, with good TS you should be able to avoid writing some tests that would be necessary in JS, and your coding style is naturally more defensive, which is good.
Originally Answered: If you build a huge website like eBay, Amazon, Facebook today which technology stack and language would you choose: Java/springboot, c#/.netcore, PHP, python, typescript/reactjs/Node.js (backendMySQL,Linux and frontend JavaScript is mostly fixed)?
Of those, TypeScript/Node.js/React is an easy answer. Though I’d also strongly recommend TypeScript on the frontend as well. If you skip Redux and instead use React Hooks you should find that TypeScript is a good fit.
But I wouldn’t use MySQL. PostgreSQL is stronger on almost every axis at this point, and given the lack of specificity of the purpose of the web site, I wouldn’t even necessarily recommend PostgreSQL over a half dozen other types of database.
Listen, if you want to design a web site such that it can grow, you need to make key technology choices strategically. If you’re using PostgreSQL, you can nearly seamlessly switch to CockroachDB, for instance, for much easier distributed database performance. Unless your database needs support for Geo-indexing, in which case you might need to split data between CockroachDB and MongoDB (edit: CockroachDB added Geo-index support!). Or if your website would benefit from a graph database, maybe OrientDB would be best.
Designing a website architecture is something that should be done by experienced experts. And the design goes deeper than just the technology choices. You need an architect who knows how to coordinate the architecture and the data flow your specific app will require. Otherwise you could paint yourself into a corner and end up with a site that’s failing at load with no easy path to fixing it, just at the point when your users are asking for more features.
A common cop-out inspired by the agile community is to claim that you just “ignore” the design and optimize later, but the truth is that many services that rely on that approach simply fail when they start to get traction.
Ironically, given your list of companies to be like, Facebook largely succeeded because a previous successful competitor, Friendster, couldn’t keep up with its expansion. The architecture had too many bottlenecks for them to scale horizontally, and they started hemorrhaging users by the thousands when the users found the site to be unresponsive too often. So if you want to be a Facebook, then plan for scaling from the start; otherwise the odds are good you’ll be a Friendster instead.
Not that Facebook necessarily planned it out in advance. I suspect they were instead just lucky. But “being lucky” isn’t a business plan.
I want to code a very basic cloud storage website like Dropbox (website only) using Javascript. What do I need to know? Any frameworks, libraries, tools I need to know?
In addition to the web site code, you’ll need:
Some kind of storage. AWS S3 is the usual solution, but Google, Azure and other services offer storage as well. There are good JavaScript APIs for all of them.
User account storage. AWS has Cognito, which I find a bit opaque, but Google Firebase has a pretty easy to use user database. Or you can roll your own user management.
You probably want server functions. AWS Lambda or Google Firebase functions will work.
I recommend using TypeScript, because I always recommend TypeScript. But you can do all the above with JavaScript.
It’s a bit overkill for a really basic Dropbox app, but I like RedwoodJS at this point. It doesn’t really help with the online storage part, but it will make it easier to deploy your server functions to a serverless backend. By Tim Mensh
How do microservices deal with relationships between tables and transactions where every service has its own database?
Ideally, microservices should be disjoint in all respects, making reference neither to each other nor to common resources like shared databases.
So if transactions need to cross multiple microservices’ calls, or if they need to join tables, maybe you have a case for combining several would-be microservices into one.
Or maybe you have a case for sharing databases across microservices.
Or maybe you should use disjoint databases with one of various strategies for implementing distributed transactions.
Or maybe you have a case for not using microservices at all.
A major benefit of microservices is that you can develop them independently — which facilitates scaling development — and you can run them independently and often multiply or redundantly, which facilitates run-time scaling.
That means microservices can be a solution to some problems, but not all. If they add more problems than they solve, or add more complexity than they’re worth, don’t get stuck using microservices for their own sake or because they’re the latest trend.
Can’t go wrong with any of those, really. I personally don’t care too much for the Node solution, but it’s plenty capable (if you can stomach that whole JS ecosystem thing)
What is a simple C++ program to find the average of 2 numbers?
This was actually one of the interview questions I got when I applied at Google.
“Write a function that returns the average of two numbers.”
So I did, the way you would expect: (x+y)/2. I did it as a C++ template so it works for any kind of number.
interviewer: “What’s wrong with it?”
Well, I suppose there could be an overflow if adding the two numbers requires more space than the numeric type can hold. So I rewrote it as (x/2) + (y/2).
interviewer: “What’s wrong with it now?”
Well, I think we are losing a little precision by pre-dividing. So I wrote it another way.
interviewer: “What’s wrong with it now?”
And that went on for about 10 minutes. It ended with us talking about the heat death of the universe.
I got the job and ended up working with the guy. He said he had never done that before. He had just wanted to see what would happen.
Comments:
1-
The big problem you get with x/2 + y/2 is that it can/will give incorrect answers for integer inputs. For example, let’s average 3 and 3. The result should obviously be 3.
But with integer division, 3/2 = 1, and 1+1 = 2.
You need to add one to the result if and only if both inputs are odd.
2- Here’s what I’d do in C++ for integers, which I believe does the right thing including getting the rounding direction correct, and it can likely be made into a template that will do the right thing as well. This is not complete code, but I believe it gets the details correct…
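(The snippet itself isn’t reproduced in this copy; the following is a hedged reconstruction of the idea, keeping the op1/op2 names referred to below. It reproduces the result of the naive (op1 + op2) / 2, including the truncation direction, without the intermediate sum ever overflowing.)
template <typename T>
T average(T op1, T op2) {
    // Opposite-sign operands cannot overflow when added, so take the direct path.
    if ((op1 < 0) != (op2 < 0))
        return (op1 + op2) / 2;
    // Same-sign (or unsigned) operands: halve first so the sum cannot overflow,
    // then add back the unit that is lost when both operands are odd.
    return op1 / 2 + op2 / 2 + (op1 % 2 + op2 % 2) / 2;
}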
That will work for any signed or unsigned integer type for op1 and op2 as long as they have the same type.
If you want it to do something intelligently where one of the operands is an unsigned type and the other one is a signed type, you could do it, but you need to define exactly what should happen, and realize that it’s quite likely that for maximum arithmetic correctness, the output type may need to be different than either input type. For instance, the average of a uint32_t and an int32_t can be too large to fit in an int32_t, and it can also be too small to fit in a uint32_t, so you probably need to go with a larger signed integer type, maybe int64_t.
3- I would have answered the question with a question, “Tell me more about the input, error handling capability of your system, and is this typical of the level of challenge here at google?” Then I’d provide eye contact, sit back, and see what happens. Years ago I had an interview question that asked what classical problem was part of a pen plotter control system. I told the interviewer that it was TSP but that if you had to change pens, you had to consider how much time it took to switch. They offered me a job but I declined given the poor financial condition of the company (SGI) which I discovered by asking the interviewer questions of my own. IMO: questions are at the heart of engineering. The interviewer, if they are smart, wants to see if you are capable of discovering the true nature of their problems. The best programmers I’ve ever worked with were able to get to the heart of problems and trade off solutions. Coding is a small part of the required skills.
Can two servers have the same public IP address?
Yes, they can.
There are features in HTTP to allow many different web sites to be served on a single IP address.
You can, if you are careful, assign the same IP address to many machines (it typically can’t be their only IP address, however, as distinguishable addresses make them much easier to manage).
You can run arbitrary server tasks on your many machines with the same IP address if you have some way of sending client connections to the correct machine. Obviously that can’t be the IP address, because they’re all the same. But there are ways.
However… this needs to be carefully planned. There are many issues. Andrew Mc Gregor
What are some algorithms that computer hardware advances have made obsolete?
It depends on how you want to store and access data.
For the most part, as a general concept, old school cryptography is obsolete.
It was based on ciphers, which were based on it being mathematically “hard” to crack.
If you can throw a compute cluster at DES, even with a one byte “salt”, it’s pretty easy to crack a password database in seconds. Minutes, if your cluster is small.
Almost all computer security is based on big number theory. Today, that’s called: Law of large numbers – Wikipedia
In probability theory, the law of large numbers (LLN) is a theorem that describes the result of performing the same experiment a large number of times. According to the law, the average of the results obtained from a large number of trials should be close to the expected value, and tends to become closer to the expected value as more trials are performed. The LLN is important because it guarantees stable long-term results for the averages of some random events.
What it means is that it’s hard to do math on very large numbers, and so if you have a large one, the larger the better.
Most cryptography today is based on elliptic curves.
But what we know from the proof of Fermat’s Last Theorem, and specifically the Taniyama-Shimura conjecture, is that all elliptic curves have modular forms.
And so this gives us an attack on all modern cryptography, using graphical mathematics.
It’s an interesting field, and problem space.
Not one I’m interested in solving, since I’m sure it has already been solved by my “associates” who now work for the NSA.
I am only interested in new problems.
Comments:
1- Sorry, but this is just wrong. “Almost all cryptography,” counted by number of bytes encrypted and decrypted, uses AES. AES does not use “large numbers,” elliptic curves, or anything of that sort – it’s essentially combinatorial in nature, with a lot of bit-diddling – though there is some group theory at its base. The same can be said about cryptographic checksums such as the SHA series, including the latest “sponge” constructions.
Where RSA and elliptic curves and such come in is public key cryptography. This is important in setting up connections, but for multiple reasons (performance – but also for excellent cryptographic reasons) is not used for bulk encryption. There are related algorithms like Diffie-Hellman and some signature protocols like DSS. All of these “use large numbers” in some sense, but even that’s pushing it – elliptic curve cryptography involves doing math over … points on an elliptic curve, which does lead you to do some arithmetic, but the big advantage of elliptic curves is that the numbers are way, way smaller than for, say, RSA for equivalent security.
Much research these days is on “post-quantum cryptography” – cryptography that is secure against attacks by quantum computers (assuming we ever make those work). These tend not to be based on “arithmetic” in any straightforward sense – the ones that seem to be at the forefront these days are based on computation over lattices.
Cracking a password database that uses DES is so far away from what cryptography today is about that it’s not even related. Yes, the original Unix implementations – almost 50 years ago – used that approach. So?
C++ lambda functions are syntactic sugar for a longstanding set of practices in both C and C++: passing a function as an argument to another function, and possibly connecting a little bit of state to it.
This goes way back. Look at C’s qsort():
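(The snippet that was shown here is missing; for reference, the standard declaration of qsort from <stdlib.h> is:)

    void qsort(void *base, size_t nmemb, size_t size,
               int (*compar)(const void *, const void *));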
That last argument is a function pointer to a comparison function. You could use a captureless lambda for the same purpose in modern C++.
Sometimes, you want to tack a little bit of extra state alongside the function. In C, one way to do this is to provide an additional context pointer alongside the function pointer. The context pointer will get passed back to the function as an argument.
In C++, that context pointer can be this. When you do that, you have something called a function object. (Side note: function objects were sometimes called functors; however, functors aren’t really the same thing.)
If you overload the function call operator for a particular class, then objects of that class behave as function objects. That is, you can pretend like the object is a function by putting parentheses and an argument list after the name of an instance! When you arrive at the overloaded operator implementation, this will point at the instance.
Instances of this class will add an offset to an integer. The function call operator is operator() below.
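The listing is missing from this copy; a reconstruction consistent with that description (the name AddOffset is mine) might look like this:

    // A class whose instances add a fixed offset to an integer.
    class AddOffset {
    public:
        explicit AddOffset(int offset) : offset_(offset) {}
        int operator()(int x) const { return x + offset_; }  // the function call operator
    private:
        int offset_;
    };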
and to use it:
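(Again a reconstruction, written to match the output described below:)

    #include <iostream>

    int main() {
        AddOffset add42(42);                  // the "captured" state lives in the object
        for (int i = 0; i < 10; ++i)
            std::cout << add42(i) << '\n';    // 42 + 0, 42 + 1, ..., 42 + 9
    }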
That’ll print out the numbers 42, 43, 44, … 51 on separate lines.
And tying this back to the qsort() example from earlier: C++’s std::sort can take a function object for its comparison operator.
Modern C++’s lambda functions are syntactic sugar for function objects. They declare a class with an unutterable name, and then give you an instance of that class. Under the hood, the class’ constructor implements the capture, and initializes any state variables.
Other languages have similar constructs. I believe this one originated in LISP. It goes waaaay back.
As for any challenges associated with them: lifetime management. You potentially introduce a non-nested lifetime for any state associated with the callback, function object, or lambda.
If it’s all self contained (i.e. it keeps its own copies of everything), you’re less likely to have a problem. It owns all the state it relies on.
If it has non-owning pointers or references to other objects, you need to ensure the lifetime of your callback/function object/lambda remains within the lifetime of that other non-owned object. If that non-owned object’s lifetime isn’t naturally a superset of the callback/function object/lambda, you should consider taking a copy of that object, or reconsider your design.
Visual Studio Code is OK if you can’t find anything better for the language you’re using. There are better alternatives for most popular languages.
C# – Use Visual Studio Community, it’s free, and far better than Visual Studio Code.
Java – Use IntelliJ
Go – Goland.
Python – PyCharm.
C or C++ – CLion.
If you’re using a more unusual language, maybe Rust, Visual Studio Code might be a good choice.
Comments:
#1: Just chipping in here. I used to be a massive Visual Studio fanboy and loved my fancy GUI for doing things without knowing what was actually happening. I’ve been using VS Code and Linux for a few years now and am really enjoying the bare-metal exposure you get working with them; typing commands is way faster for getting things done than mouse-clicking through a bunch of GUIs. Both are good though.
#2: C# is unusual in that it’s the only language which doesn’t follow the maxim, “if JetBrains have blessed your language with attention, use their IDE”.
Visual Studio really is first class.
#3: for Rust as long as you have rust-analyzer and clippy, you’re good to go. Vim with lua and VS Code both work perfectly.
#4: This is definitely skirting the realm of opinion. It’s a great piece of software. There is better and worse stuff but it all depends upon the person using it, their skill, and style of development.
#5: VSCode is excellent for coding. I’ve been using it for about 6 years now, mainly for Python work, but also developing JS based mobile apps. I mainly use Visual Studio, but VSC’s slightly stripped back nature has been embellished with plenty of updates and more GUI discovery methods, plus that huge extensions library (I’ve worked with the creation of an intellisense style plugin as well).
I’m personally a fan of keeping it simple on IDEs, and I work in a lot of languages. I’m not installing 6 or 7 IDEs because they apparently have advantages in that specific language, so I’d rather install one IDE which can do a credible job on all of them.
I’m more a fan of developing software than getting anally retentive about knowing all the keyboard shortcuts to format a source file. Life’s too short for that. Way too short!
Dmitry Aliev is correct that this was introduced into the language before references.
I’ll take this question as an excuse to add a bit more color to this.
C++ evolved from C via an early dialect called “C with Classes”, which was initially implemented with Cpre, a fancy “preprocessor” targeting C that didn’t fully parse the “C with Classes” language. What it did was add an implicit this pointer parameter to member functions. E.g.:
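(The original snippet is missing here; judging from the mangled name below, it was presumably along these lines:)

    struct S {
        int f();   // a member function with no explicit parameters
    };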
was translated to something like:
int f__1S(S *this);
(the funny name f__1S is just an example of a possible “mangling” of the name of S::f, which allows traditional linkers to deal with the richer naming environment of C++).
What might come as a surprise to the modern C++ programmer is that in that model this is an ordinary parameter variable and therefore it can be assigned to! Indeed, in the early implementations that was possible:
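The illustration that followed is lost; a minimal sketch of the kind of code those early implementations accepted (ill-formed in ISO C++, shown only to make the point):

    struct T {
        void f() {
            this = 0;   // 'this' behaved like an ordinary, assignable parameter;
                        // every modern compiler rejects this line
        }
    };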
Interestingly, an idiom arose around this ability: Constructors could manage class-specific memory allocation by “assigning to this” before doing anything else in the constructor. E.g.:
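That constructor example is likewise missing; the idiom looked roughly like the following (again, not valid modern C++; my_alloc and my_free stand in for whatever class-specific allocator was actually used):

    class X {
    public:
        X() {
            this = my_alloc(sizeof(X));   // take over allocation before doing anything else
            // ... normal member initialization follows ...
        }
        ~X() {
            my_free(this);
            this = 0;                     // tell the compiler not to release the storage again
        }
    };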
That technique (brittle as it was, particularly when dealing with derived classes) became so widespread that when C with Classes was re-implemented with a “real” compiler (Cfront), assignment to this remained valid in constructors and destructors even though this had otherwise evolved into an immutable expression. The C++ front end I maintain still has modes that accept that anachronism. See also section 17 of the old Cfront manual found here, for some fun reminiscing.
When standardization of C++ began, the core language work was handled by three working groups: Core I dealt with declarative stuff, Core II dealt with expression stuff, and Core III dealt with “new stuff” (templates and exception handling, mostly). In this context, Core II had to (among many other tasks) formalize the rules for overload resolution and the binding of this. Over time, they realized that that name binding should in fact be mostly like reference binding. Hence, in standard C++ the binding of something like:
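The example that completed this sentence is missing; the model can be sketched as follows (using the name __this for the hidden reference parameter, as the next paragraph does):

    struct S3 { int m; int f(); };
    int S3::f() { return m; }

    // Conceptually, f behaves as if it were a free function
    //     int f(S3& __this) { return __this.m; }
    // A call such as  x.f()  binds __this to x exactly the way reference binding would,
    // and the expression 'this' inside f() evaluates to &__this.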
In other words, the expression this is now effectively a kind of alias for &__this, where __this is just a name I made up for an unnamable implicit reference parameter.
C++11 further tweaked this by introducing syntax to control the kind of reference that this is bound from. E.g.,
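The example is missing here as well; the C++11 syntax in question is the ref-qualifier on a member function:

    struct R {
        void f() &;    // the implicit object argument binds like an lvalue reference:  r.f();
        void f() &&;   // ...or like an rvalue reference:                               R{}.f();
    };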
That model was relatively well-understood by the mid-to-late 1990s… but then unfortunately we forgot about it when we introduced lambda expression. Indeed, in C++11 we allowed lambda expressions to “capture” this:
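The missing example presumably showed a lambda with a this capture; a minimal sketch of the pitfall described next:

    struct Widget {
        int value = 42;
        auto getter() {
            return [this] { return value; };   // captures only the pointer, not a copy of *this;
        }                                      // the lambda dangles if the Widget dies first
    };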
After that language feature was released, we started getting many reports of buggy programs that “captured” this thinking they captured the class value, when instead they really wanted to capture __this (or *this). So we scrambled to try to rectify that in C++17, but because lambdas had gotten tremendously popular we had to make a compromise. Specifically:
we introduced the ability to capture *this
we allowed [=, this] since now [this] is really a “by reference” capture of *this
even though [this] was now a “by reference” capture, we left in the ability to write [&, this], despite it being redundant (compatibility with earlier standards)
Our tale is not done, however. Once you write much generic C++ code you’ll probably find out that it’s really frustrating that the __this parameter cannot be made generic because it’s implicitly declared. So we (the C++ standardization committee) decided to allow that parameter to be made explicit in C++23. For example, you can write (example from the linked paper):
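The paper's code isn't reproduced in this copy; a minimal sketch of the C++23 feature (mine, not necessarily the paper's exact example) looks like this:

    struct Adder {
        int amount = 1;
        int apply(this Adder self, int x) {   // explicit, by-value object parameter
            return x + self.amount;
        }
    };

    // Adder{2}.apply(5) == 7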
In that example, the “object parameter” (i.e., the previously hidden reference parameter __this) is now an explicit parameter and it is no longer a reference!
Here is another example (also from the paper):
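Again, a sketch rather than the paper's exact code, chosen to show both points listed below:

    #include <utility>

    struct Base {
        int i = 0;
        template <typename Self>
        auto&& get(this Self&& self) {         // the object parameter's type is deduced...
            return std::forward<Self>(self).i;
        }
    };

    struct Derived : Base {};

    // Derived d;
    // d.get();   // ...and here Self deduces to Derived&, not Base&.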
Here:
the type of the object parameter is a deducible template-dependent type
the deduction actually allows a derived type to be found
This feature is tremendously powerful, and may well be the most significant addition by C++23 to the core language. If you’re reasonably well-versed in modern C++, I highly recommend reading that paper (P0847) — it’s fairly accessible.
It adds some extra steps in design, testing and deployment for sure. But it can buy you an easier path to scalability and an easier path to fault tolerance and live system upgrades.
It’s not REST itself that enables that. But if you use REST you will have split your code up into independently deployable chunks called services.
So more development work to do, yes, but you get something a single monolith can’t provide. If you need that, then the REST service approach is a quick way to doing it.
We must compare like for like in terms of results for questions like this.
Based on what I could find, the strtok library function appeared in System III UNIX some time in 1980.
In 1980, memory was small, and programs were single threaded. I don’t know whether UNIX had any support for multiple processors, even. I think that happened a few years later.
This was 3 years before they started the standardization process, and 9 years before it was standardized in ANSI C.
This was simple and good enough, and that’s what mattered most. It’s far from the only library function with internal state.
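For the curious, the internal state is easy to see in use: strtok remembers where it left off between calls, which is exactly what makes it awkward in multithreaded code (hence the later, reentrant strtok_r):

    #include <string.h>
    #include <stdio.h>

    int main(void) {
        char line[] = "foo,bar,baz";
        /* passing NULL on later calls means "continue from the hidden saved position" */
        for (char *tok = strtok(line, ","); tok != NULL; tok = strtok(NULL, ","))
            printf("%s\n", tok);
        return 0;
    }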
And Lex/YACC took over more complex scanning and parsing tasks, so it probably didn’t get a lot of attention for the lightweight uses it was put to.
For a tongue-in-cheek take on how UNIX and C were developed, read this classic:
The Rise of “Worse is Better” by Richard Gabriel

I and just about every designer of Common Lisp and CLOS has had extreme exposure to the MIT/Stanford style of design. The essence of this style can be captured by the phrase “the right thing.” To such a designer it is important to get all of the following characteristics right:

· Simplicity – the design must be simple, both in implementation and interface. It is more important for the interface to be simple than the implementation.
· Correctness – the design must be correct in all observable aspects. Incorrectness is simply not allowed.
· Consistency – the design must not be inconsistent. A design is allowed to be slightly less simple and less complete to avoid inconsistency. Consistency is as important as correctness.
· Completeness – the design must cover as many important situations as is practical. All reasonably expected cases must be covered. Simplicity is not allowed to overly reduce completeness.

I believe most people would agree that these are good characteristics. I will call the use of this philosophy of design the “MIT approach.” Common Lisp (with CLOS) and Scheme represent the MIT approach to design and implementation.

The worse-is-better philosophy is only slightly different:

· Simplicity – the design must be simple, both in implementation and interface. It is more important for the implementation to be simple than the interface. Simplicity is the most important consideration in a design.
· Correctness – the design must be correct in all observable aspects. It is slightly better to be simple than correct.
· Consistency – the design must not be overly inconsistent. Consistency can be sacrificed for simplicity in some cases, but it is better to drop those parts of the design that deal with less common circumstances than to introduce either implementational complexity or inconsistency.
· Completeness – the design must cover as many important situations as is practical. All reasonably expected cases should be covered. Completeness can be sacrificed in favor of any other quality. In fact, completeness must be sacrificed whenever implementation simplicity is jeopardized. Consistency can be sacrificed to achieve completeness if simplicity is retained; especially worthless is consistency of interface.

Early Unix and C are examples of the use of this school of design, and I will call the use of this design strategy the “New Jersey approach.” I have intentionally caricatured the worse-is-better philosophy to convince you that it is obviously a bad philosophy and that the New Jersey approach is a bad approach. However, I believe that worse-is-better, even in its strawman form, has better survival characteristics than the-right-thing, and that the New Jersey approach when used for software is a better approach than the MIT approach. Let me start out by retelling a story that shows that the MIT/New-Jersey distinction is valid and that proponents of each philosophy actually believe their philosophy is better.
Because the ‘under the hood’ code is about 50 years old. I’m not kidding. I worked on some video poker machines that were made in the early 1970’s.
Here’s how they work.
You have an array of ‘cards’ from 0 to 51. Pick one at random. Slap it in position 1 and take it out of your array. Do the same for the next card … see how this works?
Video poker machines are really that simple. They literally simulate a deck of cards.
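To make the description concrete, here is a small sketch of that dealing loop (illustrative only; a certified machine would use an approved hardware RNG rather than mt19937):

    #include <array>
    #include <iostream>
    #include <numeric>
    #include <random>
    #include <utility>

    int main() {
        std::array<int, 52> deck;
        std::iota(deck.begin(), deck.end(), 0);           // cards 0..51

        std::mt19937 rng{std::random_device{}()};
        std::array<int, 5> hand{};
        for (int i = 0; i < 5; ++i) {
            // pick uniformly from the cards still left in the deck...
            std::uniform_int_distribution<int> pick(i, 51);
            std::swap(deck[i], deck[pick(rng)]);          // ...and "take it out of the array"
            hand[i] = deck[i];
        }
        for (int card : hand) std::cout << card << ' ';   // the dealt five-card hand
        std::cout << '\n';
    }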
Anything else, at least in Nevada, is illegal. Let me rephrase that, it is ILLEGAL, in all caps.
If you were to try to make a video poker game (or video keno, or slot machine) in any other way than as close to truly random selection from an ‘array’ of options as you can get, Nevada Gaming will come after you so hard and fast, your third cousin twice removed will have their ears ring for a week.
That is if the Families don’t get you first, and they’re far less kind.
All the ‘magic’ is in the payout tables, which on video poker and keno are literally posted on every machine. If you can read them, you can figure out exactly what the payout odds are for any machine.
There’s also a little note at the bottom stating that the video poker machine you’re looking at uses a 52 card deck.
Comments:
1- I have a slot machine, and the code on the odds chip looks much like an Excel spreadsheet: every combination is displayed in this spreadsheet, so the exact odds can be listed in payout tables. The machine picks a random number, let’s say 452 out of 1000. The computer looks at the spreadsheet and says that this is the combination bar-bar-7 and you get 2 credits for it. The wheels will spin to match the indication in the spreadsheet. If I go into the game diagnostics I can see whether it is a win or not; you do not win on what the wheels display, but on the actual number from the spreadsheet. The game knows whether you won or lost before the wheels stop.
2- I had a conversation with a guy who had retired from working in casino security. He was also responsible for some setup and maintenance on slot machines, video poker and others. I asked about the infamous video poker machine that a programmer at the manufacturer had put in a backdoor so he and a few pals could get money. That was just before he’d started but he knew how it was done. IIRC there was a 25 step process of combinations of coin drops and button presses to make the machine hit a royal flush to pay the jackpot.
Slot machines that have mechanical reels actually run very large virtual reels. The physical reels have position encoders so the electronics and software can select which symbol to stop on. This makes for far more possible combinations than relying on the space available on the physical reels.
Those islands of machines with the sign that says 95% payout? Well, you guess which machine in the group is set to that payout % while the rest are much closer to the minimum allowed.
Machines with a video screen that gives you a choice of things to select by touch or button press? It doesn’t matter what you select, the outcome is pre-determined. For example, if there’s a grid of spots and the first three matches you get determines how many free spins you get, if the code stopped on giving you 7 free spins, out of a possible maximum of 25, you’re getting 7 free spins no matter which spots you touch. It will tease you with a couple of 25s, a 10 or 15 or two, but ultimately you’ll get three 7s, and often the 3rd 25 will be close to the other two or right next to the last 7 “you” selected to make you feel like you just missed it when the full grid is briefly revealed.
There was a Discovery Channel show where the host used various power tools to literally hack things apart to show their insides and how they worked. In one episode he sawed open a couple of slot machines, one from the 1960’s and a purely mechanical one from the 1930’s or possibly 1940’s. In that old machine he discovered the casino it had been in decades prior had installed a cheat. There was a metal wedge bolted into the notch for the 7 on one reel so it could never hit the 777 jackpot. I wondered if the Nevada Gaming Commission could trace the serial number and if they could levy a fine if the company that had owned and operated it was still in business.
3- Slightly off-topic. I worked for a company that sold computer hardware, one of our customers was the company that makes gambling machines. They said that they spent close to $0 on software and all their budget on licensing characters
This question is like asking why you would ever use int when you have the Integer class. Java programmers seem especially zealous about everything needing to be wrapped, and wrapped, and wrapped.
Yes, ArrayList<Integer> does everything that int[] does and more… but sometimes all you need to do is swat a fly, and you just need a flyswatter, not a machine-gun.
Did you know that in order to convert int[] to ArrayList<Integer>, the system has to go through the array elements one at a time and box them, which means creating a garbage-collected object on the heap (i.e. Integer) for each individual int in the array? That’s right; if you just use int[], then only one memory alloc is needed, as opposed to one for each item.
I understand that most Java programmers don’t know about that, and the ones who do probably don’t care. They will say that this isn’t going to be the reason your program is running slowly. They will say that if you need to care about those kinds of optimizations, then you should be writing code in C++ rather than Java. Yadda yadda yadda, I’ve heard it all before. Personally though, I think that you should know, and should care, because it just seems wasteful to me. Why dynamically allocate n individual objects when you could just have a contiguous block in memory? I don’t like waste.
I also happen to know that if you have a blasé attitude about performance in general, then you’re apt to be the sort of programmer who unknowingly, unnecessarily writes four nested loops and then has no idea why their program took ten minutes to run even though the list was only 100 elements long. At that point, not even C++ will save you from your inefficiently written code. There’s a slippery slope here.
I believe that a software developer is a sort of craftsman. They should understand their craft, not only at the language level, but also how it works internally. They should convert int[] to ArrayList<Integer> only because they know the cost is insignificant, and they have a particular reason for doing so other than “I never use arrays, ArrayList is better LOL”.
Last time I needed to write an Android app, even though I already knew Java, I still went with Kotlin 😀
I’d rather work in a language I don’t know than… Java… and yes, I know a decent Java IDE can auto-generate this code – but this only solves the problem of writing the code, it doesn’t solve the problem of having to read it, which happens a lot more than writing it.
I mean, which of the below conveys the programmer’s intent more clearly, and which one would you rather read when you forget what a part of the program does and need a refresher:
Even if both of them required no effort to write… the Java version is pure brain poison…
If you have two books on the same subject, but one is skinny and the other is fat, go with the skinny one. For example:
The book on the left has 796 pages; the book on the right a mere 176. Yet the book on the right told us everything we needed to know to write our own, efficient, native-code-generating Plain English compiler in Plain English:
The Osmosian Order of Plain English Programmers Welcomes You
Program in a language you already know
https://osmosianplainenglishprogramming.blog/
Compare also the Inside Macintosh documentation before and after the Pascal programmers were replaced with C programmers:
Note that the whole set (green arrow) documenting the slim and trim Pascal system was the same size as a single volume (red arrow) of the bloated C version.
Because it’s insufficient to deal with the memory semantics of current computers. In fact, it was obsolete almost as soon as it first became available.
Volatile tells a compiler that it may not assume the value of a memory location has not changed between reads or writes. This is sometimes sufficient to deal with memory-mapped hardware registers, which is what it was originally for.
But that doesn’t deal with the semantics of a multiprocessor machine’s cache, where a memory location might be written and read from several different places, and we need to be sure we know when written values will be observable relative to control flow in the writing thread.
Instead, we need to deal with acquire/release semantics of values, and the compilers have to output the right machine instructions so that we get those semantics from the real machines. So, the atomic memory intrinsics come to the rescue. This is also why inline assembler acts as an optimization barrier; before there were intrinsics for this, it was done with inline assembler. But intrinsics are better, because the compiler can still do some optimization with them.
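A small sketch of the difference (my example, not the answer's): a flag handoff between two threads, where volatile would not make the payload reliably visible but release/acquire on a std::atomic does:

    #include <atomic>
    #include <cassert>
    #include <thread>

    int payload = 0;
    std::atomic<bool> ready{false};

    void producer() {
        payload = 42;                                    // plain write
        ready.store(true, std::memory_order_release);    // publish: everything above is visible...
    }

    void consumer() {
        while (!ready.load(std::memory_order_acquire)) {}  // ...to whoever observes this store
        assert(payload == 42);                             // guaranteed; volatile alone promises nothing here
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
    }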
C++ is a programming language specified through a standard that is “abstract” in various ways. For example, that standard doesn’t currently formally recognize a notion of “runtime” (I would actually like to change that a little bit in the future, but we’ll see).
Now, in order to allow implementations to make assumptions it removes certain situations from the responsibility of the implementation. For example, it doesn’t require (in general) that the implementation ensure that accesses to objects are within the bounds of those objects. By dropping that requirement, the code for valid accesses can be more efficient than would be required if out-of-bounds situations were the responsibility of the implementation (as is the case in most other modern programming languages). Those “situations” are what we call “undefined behaviour”: The implementation has no specific responsibilities and so the standard allows “anything” to happen. This is in part why C++ is still very successful in applications that call for the efficient use of hardware resources.
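As a concrete illustration (mine, not the answer's) of the kind of situation the standard leaves undefined:

    int out_of_bounds() {
        int a[4] = {1, 2, 3, 4};
        return a[4];   // one past the end: no bounds check is mandated, so the
                       // implementation has no specific responsibilities here
    }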
Note, however, that the standard doesn’t disallow an implementation from doing something that is implementation-specified in those “undefined behaviour” situations. It’s perfectly all right (and feasible) for a C++ implementation to be “memory safe” for example (e.g., not attempt access outside of object bounds). Such implementations have existed in the past (and might still exist, but I’m not currently aware of one that completely “contains” undefined behaviour).
ADDENDUM (July 16th, 2021):
The following article about undefined behavior crossed my metaphorical desk today:
Coding is a process of translating and transforming a problem into a step by step set of instructions for a machine. Just like every skill, it requires time and practice to learn coding. However, by following some simple tips, you can make the learning process easier and faster. First, it is important to start with the basics. Do not try to learn too many programming languages at once. It is better to focus on one language and master it before moving on to the next one. Second, make use of resources such as books, online tutorials, and coding bootcamps. These can provide you with the structure and support you need to progress quickly. Finally, practice regularly and find a mentor who can offer guidance and feedback. By following these tips, you can develop the programming skills you need to succeed in your career.
There are plenty of resources available to help you improve your coding skills. Check out some of our favorite coding tips below:
– Find a good code editor and learn its shortcuts. This will save you time in the long run.
– Do lots of practice exercises. It’s important to get comfortable with the syntax and structure of your chosen programming language.
– Get involved in the coding community. There are many online forums and groups where programmers can ask questions, share advice, and collaborate on projects.
– Read code written by experienced developers. This will give you insight into best practices and advanced techniques.
It doesn’t. First, a database is a collection of related data, so I assume you mean DBMS or database language.
Second, pagination is generally a function of the front-end and/or middleware, not the database layer.
But some database languages provide helpful facilities that aid in implementing pagination. For example, many SQL dialects provide LIMIT and OFFSET clauses that can be used to emit up to n rows starting at a given row number. I.e., a “page” of rows. If the query results are sorted via ORDER BY and are generally unchanged between successive invocations, then that can be used to implement pagination.
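As a rough sketch of the LIMIT/OFFSET approach (using SQLite's C API purely for illustration; the users table and its columns are made up for the example):

    #include <sqlite3.h>
    #include <cstdio>

    // Print one "page" of rows, page_size rows at a time.
    int print_page(sqlite3* db, int page, int page_size) {
        const char* sql = "SELECT id, name FROM users ORDER BY id LIMIT ?1 OFFSET ?2;";
        sqlite3_stmt* stmt = nullptr;
        if (sqlite3_prepare_v2(db, sql, -1, &stmt, nullptr) != SQLITE_OK)
            return -1;
        sqlite3_bind_int(stmt, 1, page_size);
        sqlite3_bind_int(stmt, 2, page * page_size);
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            std::printf("%d\t%s\n",
                        sqlite3_column_int(stmt, 0),
                        reinterpret_cast<const char*>(sqlite3_column_text(stmt, 1)));
        }
        sqlite3_finalize(stmt);
        return 0;
    }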
That may not be the most efficient or effective implementation, though.
In the context of web apps, let’s say there are 100 million users. One cannot dump all the users in a single response.
Cache database query results in the middleware layer using Redis or similar and serve out pages of rows from that.
What if you have 30,000-plus rows? Do you fetch all of that from the database and cache it in Redis?
I feel the most efficient solution is still offset and limit. It doesn’t make sense to use a database and then end up putting all of your data in Redis especially data that changes a lot. Redis is not for storing all of your data.
If you have a large data set, you should use offset and limit; getting only what is needed from the database into main memory (and maybe caching those rows in Redis) at any point in time is very efficient.
With 30,000 rows in a table, if offset/limit is the only viable or appropriate restriction, then that’s sometimes the way to go.
More often, there’s a much better way of restricting 30,000 rows via some search criteria that significantly reduces the displayed volume of rows — ideally to a single page or a few pages (which are appropriate to cache in Redis.)
It’s unlikely (though it does happen) that users really want to casually browse 30,000 rows, page by page. More often, they want this one record, or these small number of records.
I know for MySQL there is LIMIT offset,size; and for Oracle there is ‘ROW_NUMBER’ or something like that.
But when such ‘paginated’ queries are called back to back, does the database engine actually do the entire ‘select’ all over again and then retrieve a different subset of results each time? Or does it do the overall fetching of results only once, keeps the results in memory or something, and then serves subsets of results from it for subsequent queries based on offset and size?
If it does the full fetch every time, then it seems quite inefficient.
If it does full fetch only once, it must be ‘storing’ the query somewhere somehow, so that the next time that query comes in, it knows that it has already fetched all the data and just needs to extract next page from it. In that case, how will the database engine handle multiple threads? Two threads executing the same query?
Answer: First of all, do not assume in advance whether something will be quick or slow without taking measurements, and do not complicate the code in advance by downloading 12 pages at once and caching them because “it seems to me that it will be faster”.
YAGNI principle – the programmer should not add functionality until deemed necessary. Do it in the simplest way (ordinary pagination of one page), measure how it works on production, if it is slow, then try a different method, if the speed is satisfactory, leave it as it is.
From my own practice – an application that retrieves data from a table containing about 80,000 records, the main table is joined with 4-5 additional lookup tables, the whole query is paginated, about 25-30 records per page, about 2500-3000 pages in total. The database is Oracle 12c, there are indexes on a few columns, and queries are generated by Hibernate. Measurements on the production system at the server side show that the average time (median – 50th percentile) of retrieving one page is about 300 ms. The 95th percentile is less than 800 ms – this means that 95% of requests for retrieving a single page take less than 800 ms; when we add a transfer time from the server to the user and a rendering time of about 0.5-1 seconds, the total time is less than 2 seconds. That’s enough, users are happy.
And some theory – see this answer to know what is purpose of Pagination pattern
Tech Jobs and Career at FAANG (now MAANGM): Facebook Meta Amazon Apple Netflix Google Microsoft
The FAANG companies (Facebook/Meta, Amazon, Apple, Netflix, and Google, with Microsoft now often added to the list) are some of the most sought-after employers in the tech industry. They offer competitive salaries and benefits, and their employees are at the forefront of innovation.
The interview process for a job at a FAANG company is notoriously difficult. Candidates must be prepared to answer tough technical questions and demonstrate their problem-solving skills. The competition is fierce, but the rewards are worth it. Employees of FAANG companies enjoy perks like free food and transportation, and they often have the opportunity to work on cutting-edge projects.
If you’re interested in a career in tech, Google, Facebook, or Microsoft are great places to start your search. These companies are leaders in their field, and they offer endless opportunities for career growth.
FAANG started as FANG circa 2013. The 2nd A became customary around 2016 as it wasn’t clear whether A referred to Apple or Amazon. Originally, FANG meant “large public, fast growing tech companies”. Now in 2021, the scope of what FANG referred to just doesn’t correspond to these 5 companies.
From an investment perspective (which is the origin of FANG) Facebook stock has grown the slowest of the 5 companies over the past 5 years. And they’re all dwarfed by Tesla.
From an employment desirability perspective (which is the context where FAANG is most used today), Microsoft is very similar to the group. It wasn’t “cool” around 2013 but its stock actually did better than Facebook or Alphabet over the past five years. Other companies like Airbnb, Twitter or Salesforce offer the same value proposition to employees, that is stability and tradable equity as part of the compensation.
FAANG refers to a category more than a specific list of companies.
As a side note, I expect people to routinely call the company Facebook, just like most people still say Google when they really mean Alphabet.
The technical interviews at FAANG companies, in the grand scheme, aren’t very difficult.
People frequently fail FAANG interviews because they choke — they experience anxiety and just forget their knowledge — or they don’t know the material to begin with.
Inverting a binary tree, matching up pairs of brackets, finding the duplicate in an array of distinct integers, etc., are all weeder-questions that should be solvable in 5–10 minutes, if you’re the type to suffer from interview jitters. You should know which data structures to use, intuitively, and you should be doing prep work to cover your knowledge gaps if you don’t.
Harder questions will take longer, but ultimately, you’ll have 45 minutes or so to solve 2–3 questions.
Technical interviews at FAANG companies are only difficult if you have shaky computer science fundamentals. Luckily, the process for cracking the code interview *cough* is very well-documented, hence, you only need to follow the already established strategies. If you’re interested in maximizing income while prioritizing career growth, it behooves you to spend a month or two studying these strategies.
In FAANG interview process, when you fail at the 1st (or 2nd stage), does it mean that single interviewer on the respective stage failed you, or is it still team collaboration /hiring manager decision?
If you were dropped after doing a single interview (usually called a “screen”) it means that this interviewer gave negative feedback. I would guess at some companies this feedback is reviewed by the hiring manager, but mostly I think a recruiter will just reject if the interviewer recommends no hire. Even if a hiring manager looks at it, they would probably reject almost always if the feedback is negative. The purpose of the screen is to quickly evaluate if a person is worth interviewing in depth.
If you were rejected after a whole interview panel, probably a hiring manager or similar did look at the entire feedback, and much of the time there was a discussion where interviewers looked at the entire feedback as well and shared their thoughts. However, if the feedback was clearly negative, it could’ve been just a snap decision by a manager without much discussion. Source.
What do you do after you absolutely flop a technical interview?
Take care of yourself / don’t beat yourself up.
It happens. It happened to me, it happened to smarter people. It’s ok.
Two thoughts to help here –
Getting to the interview stage is already a huge achievement. If you are interviewed, this means that in the expert opinion of the recruiters, people that did tech screens etc. you stand a chance to pass the interview. You earned your place in the interviewee seat. This is an accomplishment you can be proud of.
The consequences are probably* negligible in the long run. There’s at least 100 very desirable tech companies to work at at a given moment. You didn’t get in 1% of them at a moment in time. Big deal. You can probably retry in a few months. It’s very likely that you get an equivalent or even better opportunity, and there’s no use imagining what would have happened if you had had that job. (*“probably” because if you’re under time pressure to get a job rapidly… it may sting differently. But hey, there’s still the first thought).
As a bonus, you’ll probably remember very well the question on which you failed. Source: Jerome Cukier
If an interviewer says “we’re still interviewing other candidates at the moment”, and then walks you out into the lobby, does that mean they want to hire you potentially after or no?
Here’s a secret. I have been a recruiter for 24 years and when they walk you out after your interview and tell you that they are still interviewing other candidates at the moment, it really means they’re still interviewing other candidates at the moment. There’s no secret language here to try to interpret. It means what it means. You will have to wait for them to tell you what next steps are for you because, again, they have other people to interview. By Leah Roth
The difficulty of the interview is going to vary more from interviewer to interviewer than from company to company. Also, how difficult the questions are is not directly related to how selective the process is; the latter is heavily influenced by business factors currently affecting these companies and their current hiring plans.
Comments:
#1: So, how do you know this? You don’t. An affirmative answer to this question can only come from data.
#Answer #1: Fair question. I have been very involved in interviewing in a number of large tech cos. I have read, by now, thousands of interview debriefs. I have also interviewed a fair amount as a candidate, although I have not interviewed in each of the “FAANG” and I have definitely been more often on the interviewing side.
As such, I have seen for the same position, very easy questions and brutally difficult ones; I have seen very promising candidates not brought to onsite interviews because the hiring organization didn’t currently have resources to hire, but also ok-ish candidates given offers because the organization had trouble meeting their hiring targets. As a candidate I also experienced: easy interview exercises but no offer, very hard interview exercises and offer (with the caveat that I never know exactly how well I do, but I certainly can tell if a coding question or a system design question is easy or hard).
So. I am well aware that it’s still anecdotal evidence, but it’s still based on a fairly large sample of interviews and candidates.
#Reply to #1: Nope, you’re wrong. I have experience in the interview process at Amazon and Microsoft and have a different conclusion. Moreover, “experts” in lots of disparate fields make claims that are a bunch of bullcrap due to their own experiential biases. Additionally, you would need to be involved at all of the companies listed, not just some of them, for that experience to be relevant in answering this question. We need to look at the data. If you don’t have data, I will not trust you just because of “your experience”. I don’t think it’s possible for Jerry C to have the necessary information to justify the confidence that is projected in this answer.
What you need is not so much a list of “incidents” but more generally some self-awareness on what you care about and how you’ve progressed and how you see your career.
The best source for this material is your performance reviews. Ideally you also kept some document about your career goals and/or conversations with your manager. (If you don’t have such documents, it’s never too late to start them!)
You should have 5–6 situations that are fairly recent and that you know like the back of your hand. These must include something difficult, and some of these situations must be focused on interpersonal relationships (or more generally, you should be aware of more situations that involved a difficult interpersonal relation). They may or may not have had a great outcome – it’s ok if you didn’t save the day. But you should always know the outcome, both in terms of business and in terms of your personal growth.
Once you have your set of situations and you can easily access these stories / effortlessly remember all details, you’ll find it much easier to answer any behavioural question.
In a software engineering interview, How should one answer the question, ‘Could you tell me about some of the technical challenges in your previous projects’?
To take a few steps back, there are 2 things that interviewers care about in behavioural interviews – whether the candidate has the right level, and whether they exhibit certain skillsets.
When you look at this question from the first angle, it’s important to be able to present hard problems on which it’s clear what the candidate’s personal contribution was. Typically, later projects are better for that than earlier ones.
Now, in terms of skillsets, this really depends company by company but typically, how well a candidate is able to describe a problem especially to someone with a different expertise, and whether they spontaneously go on to describe impact metrics, goes a long way.
So a great answer: a hard, recent, large-scale project that the candidate is able to contextualize (why it was important, why it was hard, what was at stake), where they are able to describe what they’ve done, what the potential impact was, and what the actual consequences were.
Not so great answer: a project that no one asked the candidate to do, but which they insisted on doing because they thought it was cool/interesting, on which they worked alone and which didn’t have any business impact. Source.
This question (like many other things in life) is much more complicated than it appears on the surface. That’s because it is conflating several very different issues, including:
What is retirement?
What is “early”?
At what age do most software engineers stop working in that role?
How long do employees stay on average at the FAANGs?
In the “old” days (let’s arbitrarily call that mid-20th century America), the typical worker was white, male and middle class, employed on location at a job for 40–50 hours a week. He began his working career at 18 (after high school) or 22 (after college), and worked continuously for a salary until the age of 65. At that time he retired (“stopped working”) and spent his remaining 5–10 years of life sitting at home watching tv or traveling to places that he had always wanted to visit.
That world has, to a large extent, been transmogrified over the past 50 years. People are working longer, changing employment more frequently, even changing careers and professions as technology and the economy change. The work force is increasingly diverse, and virtually all occupations are open to virtually all people. Over the past two years we have seen that an astonishing number of jobs can be done remotely, and on an asynchronous basis. And all of these changes have disproportionately affected software engineering.
So, let’s begin by laying out some facts:
When people plan to retire is a factor of their generation: Generation Y — ages 25 to 40 — plans to retire at an average age of 59. For Generation X — now 41 to 56 — the average age is 60. Baby boomers — who range from 57 to 75 — indicated they plan to work longer, with an average expected retirement age of 68.[1]
The average actual retirement age in the US is 62[2]
Most software engineers retire between the ages of 45 and 65, with less than 1% of developers working later than 65.[3]
But those numbers are misleading because many software engineers experience rapid career progression and move out of a pure development role long before they retire.
The average life expectancy in Silicon Valley is 85 years.[4]
The tenure of employment at the FAANGs is much shorter than one might imagine. Unlike in the past, when a person might spend his or her entire career working for one or two employers, here are the average lengths of time that people work at the FAANGs: Facebook 2.5 years, Google 3.2 years, Apple 5 years.[5]
Therefore, if the question assumes that a software engineer gets hired at a FAANG company in his or her 20s, works there for 20 or 30 years as a coder, and then “retires early”, that is just not the way things work.
Much more likely is the scenario in which an engineer graduates from college at 21, gets a masters degree in computer science by 23, starts as a junior engineer at a small or large company for a few years, gets hired into a FAANG by their early 30s, spends 3–5 years coding there, is recruited to join a non-FAANG by their early 40s in a more senior role, and moves into management by their late 40s.
At that point things become a matter of personal preference: truly “retire”, start your own venture; invest in cryptocurrency; move up to senior management; begin a second career; etc.
The fact is that software engineering at a high level (such as would warrant employment at a FAANG in the first place) pays very well in relative terms, and with appropriate self-control and a moderate lifestyle would enable someone to “retire” at a relatively early age. But paradoxically, that same type of person is unlikely to do so.
Are companies like Google and Facebook heaven on earth in terms of workplaces?
No. In fact Google’s a really poor workplace by comparison with most others I’ve had in my career. Having a private office with a door you can close is a real boon to doing thoughtful, creative work, and having personal space so that you can feel psychologically safe is important too.
You don’t get any of that at Google, unless you’re a director or VP and your job function requires closed-door meetings. I have a very nice, state-of-the-art standing desk, with a state-of-the-art monitor, and the only way for me to avoid hearing my tech lead’s conversations is to put headphones on. (You can get very nice, state-of-the-art headphones, too.)
On the other hand, I also have regular access to great food, and an excellent gym, and all the La Croix water I can drink. I get to work on the most incredible technological platform on earth. And the money’s good. But heaven on earth? Nah. That’s one of the reasons the money’s good.
What is the starting salary of a software engineer at Google?
A new grad software engineer (L3) at Google makes a salary around $193,000 including stock compensation and bonus. The industry is getting a lot more competitive and top companies such as Google have to make offers with really generous stock packages. The below diagram shows a breakdown for the salary. View all the crowdsourced reports as well as other levels on Levels.fyi.
Hope that helps!
What is the best Google employee perk, and why?
Having recently left Google for a new startup, I have to agree that the most-missed perk is the food. It’s not so much that it’s free — you can get lunch for about $10 per day so the cost is not a huge deal. There is simply nowhere you can go, even in a Silicon Valley city like Mountain View, that has healthy low-fat, varied choices that include features like edible fruits and vegetables. The food is even color-coded (red/yellow/green) based on how healthy it is (it always bothered me that the peanut-butter cups are red….).
Outside of Google you end up having muffins for breakfast and pizza for lunch. It tastes good but it’s not the same to your body.
But beyond just the food, the long term health impact of the set of perks at Google is huge. There is nothing better than being able to come in early, work out at the (free) gym by your office, shower (with towels provided as noted by others), then have eggs (or egg whites if you prefer) and toast (or one of a dozen other breakfasts). Source
Everyone has a study plan and list of resources they like to use. Different plans work for different people and there is no one size fits all.
This by no means is the only list of resources to join a larger technology company. But it is the list of resources I used myself to prepare for all my technology interviews.
Quick Background
I’m currently an engineer at Microsoft and previously worked at Amazon, spending about a year at each. I don’t have a master’s degree and I graduated from NYU, not an Ivy League school. I’ll soon be joining Google, and the following resources are how I got there.
Yes, the purchasable resources are affiliate links that help support this blog. Regardless, these are the resources I’ve used both purchasable and free.
Cracking the Coding Interview (CTCI) is the simplest book to get anyone started in studying for coding interviews.
If you’re an absolute beginner, I recommend you to start here. The questions have very details explanations that are easy to understand with basic knowledge of algorithms and data structures.
Elements of Programming Interviews (Python, Java, C++)
If you’re a little more experienced, every question in this book is at the interviewing level of all large technology companies.
If you’ve mastered the questions in this book, then you are more than ready for the average technology interview. The book is not as beginner friendly as CTCI but it does include a study plan depending on how much you need to prepare for your interviews. This is my personal favorite book I carried everywhere in university.
Blind has a list of 75 questions that is generally enough to solve most coding interviews. It’s a very curated and focused list for the most essential algorithms to leverage your time.
The playlist above is one of the clearest explanations I’ve ever seen and highly recommend if you need an explanation on any of the problems.
These problems are hard, really hard for anyone who hasn’t practiced algorithms, and not beginner friendly. But if you are able to complete the sorting and searching section, you will be more capable than the average LeetCode user and be more than ready for your coding interview.
Consider this if you’re comfortable with LeetCode medium questions and find the questions in CTCI too easy.
This is the most common and best textbook anyone could use to learn algorithms. It’s also the textbook my university used to teach the core and essential algorithms behind most coding problems.
The 4th edition was recently released and is still relevant to MIT students. If you need structure and a traditional classroom setting to study, follow MIT’s algorithm course here.
Graph theory does come up in interviews (and was a question I had at both Bloomberg and Google). Stay prepared and follow William Fiset’s graph theory explanation.
The diagrams are comprehensive and the step-by-step explanations are the best I’ve ever seen on the topic.
This handbook is for people who are strongly proficient with most Leetcode algorithms. It’s a free resource that strongly complements the CSES.fi curriculum.
For the most experienced algorithm enthusiasts, this book will cover every niche data structure and algorithm that could possibly be asked in any coding interview. This level of preparation is not generally needed for FAANG type companies but can show up if you’re considering hedge fund type companies.
In my opinion, you will be more than ready for any system design interview using these resources. The diagrams are clear and the explanations are as simple as possible in each book to help you learn system design concepts quickly.
I personally recommend the online course because, while the content from both books is great to own, it’s the online community Discord you get access to that makes the yearly subscription worth it. The Discord includes mock-interview buddies, salary discussion, and overviews of each system design topic to study with other users.
The system design primer is the best free resource on all things system design. Dig deep into the Git repository and you will learn everything you need to know on system design. It’s all curated in a single repository and clearly structured to give you a guided curriculum.
This quick overview on system design is great to review if you’re in a rush. The read typically takes users 45 minutes but you’ll be left knowing more system design than the average engineer.
Give it a read. If concepts are unclear or confusing, that might be a sign you’re not ready for interviews.
Regardless of whether you’re learning design patterns for the object-oriented programming interview, you will need to know design patterns as a software engineer at these large companies.
The book is the origin of the world’s most common design patterns today, and showing proficiency in them in your object-oriented interview is a requirement at certain large technology companies like Amazon.
The above resource is dense and written in language that’s hard to understand. While the original source material in design patterns is great, it doesn’t help much if it’s difficult to understand.
Consider Head First Design Patterns for a simplified explanation of those common design patterns. It might not be as in-depth as the original source material, but your understanding of design patterns will be more than enough to crack any object-oriented interview.
Closing Thoughts
Honestly, I did not go through all of these resources from cover to cover. If you do, I’m sure you wouldn’t need to study for another interview again. But most of us don’t have the time for that, so make sure that once you understand the core concepts in any of the above categories, you invest your time in moving on to the next.
Again, these are the resources I used; they are not at all inclusive of anyone else’s study plan.
Three years ago I applied to Google and was rejected immediately after the phone screen. Fast forward to 2022, and I was given another chance to re-interview. Here’s how the entire experience went.
Quick Background
I am currently a junior level software engineer at Microsoft (L60) with previous experience at Amazon (SDE I). My tenure is 1 year at Microsoft and 1 year at Amazon.
The first time I applied to Google was fall of my senior year of college at NYU. I failed the phone screen horribly and never thought I would join a company as competitive as Google. But I did not want to count myself out before even interviewing.
Recruiter Screen
I slowly built my LinkedIn to make sure recruiters would notice me whenever I wrote a LinkedIn post. With 15,000 followers at the time, it wasn’t too difficult to have one of them reach out with the chance to interview. A message came into my LinkedIn inbox and I responded promptly to schedule the initial recruiter call.
The chat was focused on my previous engineering experience and some of the projects I worked on. It was important to talk about what languages I was using and how much of my day was spent coding (70% of my day at Microsoft).
The recruiter was interested in having me follow through with a full loop and asked when I would like to go through the process. It was important to me to ask what engineering level I was applying for. He shared that it was an L3/L4 role where the interviews would calibrate my level depending on my performance. Knowing that, I mentioned I’d like to interview one month later and asked what the process looked like. Here’s how it was explained to me:
Technical Phone Screen
6-Hour Virtual On-site
a. 4 Technical Coding Interviews, or 3 Technical Coding Interviews + 1 System Design
b. Behavioral “Googliness” interview
Phone Screen
Following the initial recruiter phone screen, I received an email from Google. It explained that I would be exempt from the Google Technical Phone Screen.
Why? I am personally not sure but it likely had to do with prior experience at large technology companies. I was personally surprised because to this day my first Google Phone Screen is still one of the toughest coding interviews I have ever been given.
It looked like that was just as relevant as my current work experience, and I didn’t have much to complain about: I moved more quickly through the process and went directly to the on-site.
Technical Onsite
Every coding question I was given was either on LeetCode or could be solved with the patterns you learn from solving LeetCode questions. Here’s what my experience in each of them looked like.
Coding Interview #1
The interviewer looked like someone who was my age and likely joined Google directly after university. Maybe I wasn’t jealous. Maybe I was.
The question I was given was a string-parsing hash-map question. It’s easily doable if you’ve worked through a few medium questions involving hash maps and string parsing. But if you’re not careful, you might fall into a common trap.
Let me point it out for you. Abstract away the tedious parsing logic by writing a stub like “parsingFunction()”. Otherwise 30 minutes may pass without you solving the question. I wrote a short “TODO” mentioning I’d come back to it if the interviewer cared.
Spoiler: The interviewer didn’t care.
Lastly, they asked me to optimize with a heap and state the running time. Unlike others who simply assert the running time, I solved for it, and the interview concluded there.
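The actual question isn’t shared here, so purely as an illustration of that abstraction trick (my own sketch, with made-up names like parsingFunction and topK, not the interview problem), here is what it can look like in Java: stub out the tedious parsing behind a TODO, count occurrences in a hash map, and use a heap for a “top K” style optimization.

import java.util.*;

public class TopTokens {
    // Hypothetical stub: abstract away the tedious parsing so it doesn't eat your 30 minutes.
    // TODO: flesh out the real parsing rules only if the interviewer asks for them.
    static List<String> parsingFunction(String input) {
        return Arrays.asList(input.toLowerCase().split("\\W+"));
    }

    // Count token frequencies in a hash map, then keep the k most frequent with a min-heap:
    // O(n log k) instead of sorting every entry at O(n log n).
    static List<String> topK(String input, int k) {
        Map<String, Integer> counts = new HashMap<>();
        for (String token : parsingFunction(input)) {
            if (token.isEmpty()) continue;
            counts.merge(token, 1, Integer::sum);
        }
        PriorityQueue<Map.Entry<String, Integer>> heap =
                new PriorityQueue<>((a, b) -> Integer.compare(a.getValue(), b.getValue()));
        for (Map.Entry<String, Integer> e : counts.entrySet()) {
            heap.offer(e);
            if (heap.size() > k) heap.poll();   // evict the least frequent so only k remain
        }
        List<String> result = new ArrayList<>();
        while (!heap.isEmpty()) result.add(heap.poll().getKey());
        Collections.reverse(result);            // most frequent first
        return result;
    }

    public static void main(String[] args) {
        System.out.println(topK("the quick fox jumps over the lazy dog, the fox!", 2)); // [the, fox]
    }
}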
Coding Interview #2
The interviewer was more senior than the previous one. I heard the coding question and thought the on-site was over.
The thing about some coding questions is whether you see the pattern behind the algorithm or not. Recognizing the pattern can be much more difficult than actually writing the code for it. This was one of those interviews.
After hearing the question, I thought of ways to brute-force it and looked for a pattern using smaller test cases. I wasn’t able to recognize it, and eventually the interviewer told me what the pattern was.
I tried not to come off embarrassed, but followed up with the algorithm to implement that pattern, and the interviewer gave me the “go ahead” to code. I finished coding the pattern and answered the interviewer’s follow-up on how to make my code modular to handle another requirement. This did not require implementation.
Afterwards was a discussion on time and space complexity and the interview was over.
Coding Interview #3
The interviewer was a mid-level engineer who was not as keen on chatting as the previous interviewers.
Some coding interviews hinge on a single question you either get right or you don’t. This one started off easy and iterated to become tougher.
My quick advice to anyone is to never come off arrogant for any coding question. You may know the question is easy and the interviewer likely does as well. Oftentimes it’ll get harder and all that ego will go out the window. Go through the motions and communicate as you always do for any other coding problem.
The problem given was directly on LeetCode and I felt more comfortable knowing I had solved it a while ago. If you’re familiar with “sliding window” then you more than likely would be able to solve it. But here’s where the challenge was.
After the warm-up question, the follow up had another requirement on top of the previous question. That follow up was more array manipulation. Finally the last iteration was shared.
I implemented the algorithm where Math.max was being called more than necessary. To me it didn’t affect the output of the algorithm and looked like it didn’t matter. But it mattered to the interviewer. I took that feedback and carefully implemented it the way the interviewer asked me to (whether it actually affected the algorithm or not).
Time and space complexity was solved and the interview was over.
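The real question stays undisclosed, so here is only a generic sliding-window sketch in Java (my own illustration, not the interview problem): the maximum sum of any k consecutive elements, with exactly one Math.max call per step.

public class SlidingWindow {
    // Maximum sum over any k consecutive elements, O(n) time, O(1) extra space.
    // Assumes nums.length >= k and k >= 1.
    static int maxWindowSum(int[] nums, int k) {
        int windowSum = 0;
        for (int i = 0; i < k; i++) windowSum += nums[i]; // sum of the first window
        int best = windowSum;
        for (int i = k; i < nums.length; i++) {
            windowSum += nums[i] - nums[i - k]; // slide: add the new element, drop the oldest
            best = Math.max(best, windowSum);   // one Math.max per step is all that's needed
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(maxWindowSum(new int[]{2, 1, 5, 1, 3, 2}, 3)); // 9 (5 + 1 + 3)
    }
}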
Coding Interview #4
This was another interviewer who had joined Google after university and had the same work experience I did.
The prompt was not written out for me, and I was expected to write down the details of the question myself. After asking some clarifying questions about what was and wasn’t in scope, I shared my algorithm.
The question was an object-oriented question to implement a graph. If you had taken any university course on graph theory, you would be more than prepared.
The interesting discussion was whether I should implement the traversal with BFS or DFS and explain the pros and cons of each. Afterwards, I decided on BFS (because BFS is easier for me to implement), and the follow-up requirement was to take at most K steps iteratively.
I’m not sure if that was the follow-up because I implemented it with BFS or if that was always the follow-up, but I quickly adjusted the algorithm and solved for space and time complexity as always.
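Since the prompt isn’t shared, here is a hedged Java sketch of what “BFS limited to at most K steps” can look like over a plain adjacency list; the representation and the return value are assumptions for illustration only.

import java.util.*;

public class KStepBfs {
    // Collect all nodes reachable from `start` within at most k BFS steps (levels).
    static Set<Integer> withinKSteps(Map<Integer, List<Integer>> graph, int start, int k) {
        Set<Integer> visited = new HashSet<>();
        Queue<Integer> queue = new ArrayDeque<>();
        visited.add(start);
        queue.add(start);
        for (int step = 0; step < k && !queue.isEmpty(); step++) {
            int levelSize = queue.size();        // expand exactly one level per step
            for (int i = 0; i < levelSize; i++) {
                int node = queue.poll();
                for (int next : graph.getOrDefault(node, List.of())) {
                    if (visited.add(next)) queue.add(next);
                }
            }
        }
        return visited;
    }

    public static void main(String[] args) {
        Map<Integer, List<Integer>> graph = Map.of(
                1, List.of(2, 3),
                2, List.of(4),
                3, List.of(4),
                4, List.of(5));
        System.out.println(withinKSteps(graph, 1, 2)); // 1, 2, 3 and 4 (5 is three steps away)
    }
}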
The Googliness interview
Googliness is just Google’s behavioral interview. Most questions were along the lines of:
Tell me about yourself
What’s a project you worked on?
When was a time you implemented a change?
When was a time you dealt with a coworker who wasn’t pulling their weight?
To prepare for these, I’d recommend learning about the STAR format and outlining your work experiences if you can recall them before interviewing.
This seemed to go well, but then I was given a question I didn’t expect: a product question, probing my thought process on how to work with teammates to answer it.
My key point of advice: Nothing matters if the user doesn’t want it.
Emphasize how important user research is to building a product that users will actually use; otherwise everyone’s time could be better invested in other initiatives. Avoid jumping straight into designing the product; coordinate talks with product managers and UX designers first.
Offer
Two weeks later, an informal offer arrived in my email.
Most of the interview didn’t pertain to my previous experience directly. A systematic way of approaching, communicating, and implementing solutions to coding problems is enough even without experience from Amazon/Microsoft.
That means you interviewed well. Someone else interviewed better for the first role, but the recruiter sees that there are other roles for which you might be a better fit.
The eight interviews are a sign that someone in the process wanted you specifically for some role.
I think there may be two different things going on.
First, are you sure whether it’s a FAANG recruiter, or someone from an external sourcing firm which is retained by a FAANG company? I had this experience where someone reached out on LinkedIn and said they were recruiting for a Google role and passed along a job description. As I started asking them questions, it became clear that they just wanted me to fill out an application so that they can pass it to someone else. Now, as it happens, I am a former Google employee, so it quickly became clear that this person was not from Google at all, but just retained to source candidates. The role they wanted me to apply for was not in fact suitable, despite their claim that they reached out to me because I seemed like a good match.
If you are dealing with a case like this, probably what happens is that they source very broadly, basically spamming people, on the chance that some of the people they identify will in fact be a good fit. So they would solicit a resume, pass it to someone who is actually competent to judge, and that person would reject. And the sourcing firm will often ghost you at this point.
If you are dealing with an actual internal recruiter, I think it can be a similar situation. A recruiter often doesn’t really know if you are a fit or not, and it will often be some technical person who decides. That person may spend 30 seconds on your resume and say “no”. And positions get filled too, which would cause everyone in the pipeline to become irrelevant.
In such cases there is no advantage for the recruiter to further interact with you. Now, every place I worked with, I am pretty sure, had a policy that if a recruiter interacted with the candidate at all, they were supposed to formally reject them (via email or phone). But I imagine there’s very little incentive for a recruiter to do it, so they often don’t. And as a candidate, you don’t really have any way to complain about it to the company, unless you have a friend or colleague on the inside. If you do, I suggest you ask them, and it may do some good, if not to you (you are rejected either way), at least to the next applicant.
It’s not actually a line of code, so to speak, but lines of code.
I work in Salesforce, and for those who are not familiar with its cloud architecture, a component from QA could be moved to production only if the overall test coverage of the production is 75% or more. Meaning, if the total number of lines of code across all components, including the newly introduced ones, is 10000, enough test classes must be written with appropriate test scenarios so as to cover at least 7500 lines of the lump. This rule is enforced by Salesforce itself, so there’s no going around it. Asserts, on the other hand, could be done without.
If the movement of your components causes a shift in balance in production and tips its overall coverage to below 75%, you are supposed to work on the new components and raise their coverage before deployment. A nightmare of sorts, because there is a good chance your code is all clean and the issue occurs only because of a history of dirty code that had already gone in over years to drag the overall coverage to its teetering edges.
Someone in my previous company found out a sneaky way to smuggle in some code of his (or hers) without having to worry about this problem.
So this is simple math, right? If you have got 5000 lines of code, 3750 must be covered. But what if I have managed to cover only 2500 (50%) and my deadline is dangerously close?
Simple. I add 5000 lines of unnecessary code that I can surely cover by just one function call, so that the overall line number now is 10000 and covered lines are 7500, making my coverage percentage a sweet 75.
For this purpose they introduced a few full classes with a lone method in each of them. The method starts with,
Integer i = 0;
and continues with a repetition of the following line thousands of times.
i++;
And they had the audacity to copy and paste this repetitive ‘code’ throughout a bulky method and across classes in such a reckless manner that you could see a misplaced tab in the first line replicated exactly in every 100th line or so.
Now all that is left for you to do is call this method in a test class, and you can cover scores of lines without breaking a sweat. All the code that actually matters may lie untested by the automated coverage check, glaring red if one should care to take a look, but you have effectively hoodwinked the Salesforce deployment mechanism.
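To make the trick concrete, here is roughly what those classes looked like, reconstructed from the description above. The sketch uses Java-style syntax, which is close enough to Apex to read the same; the class names are made up.

public class CoverageFiller {
    // One lone method stuffed with thousands of no-op increments.
    public static void fill() {
        Integer i = 0;
        i++;
        i++;
        i++;
        // ...the same line repeated a few thousand more times...
    }
}

// In Apex the companion class below would live in its own file and be annotated @isTest.
// Calling fill() once "covers" every one of those padded lines, dragging org-wide coverage
// back over the 75% deployment threshold without testing anything real.
class CoverageFillerTest {
    public static void callFiller() {
        CoverageFiller.fill();
    }
}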
And the aftermath is even crazier. Seeing the way hordes of components could be moved in without having to embark on the tedious process of writing test classes, this technique acquired a status equivalent to ‘Salesforce best practices’ in our practice. In almost all the main orgs, if you search for it, you can find a class with streams of ‘i++;’ flowing along the screen for as far as you have the patience to scroll down.
Well, these cloaked dastards remained undetected for years before some of the untested scenarios started reeking. More sensible developers fished out the ‘i++;’ classes, raised the alarm and got down to cleaning up the mess. Just removing those classes drove the overall production coverage to an abysmal low, preventing any form of interaction with production. What can I say, that kept many of us busy for at least a month.
I wouldn’t call the ‘developers’ that put this code in dumb. I would rather go for ‘wicked’. The higher heads and testers who didn’t care to look while this passed under their noses do qualify as dumb.
And the code… Man, that’s the dumbest thing I’ve ever seen.
If you are in the pipeline and you have interviews scheduled, then your recruiter will know exactly what loop will be set up for you and what kind of questions you may get. Recruiters try to get their candidates all the information they need to approach the interviews at the top of their potential, so ask them everything you need to know.
The actual answer depends on the candidate’s level and profile; the composition of the interviews is pretty much bespoke.
Dev: Alright, let the competition begin!
Startup A: We will give you 50% of the revenue!
Startup B: To hell with it, we will give you 100%!
Startup A: Eh… we will give you 150%!
TL;DR: Nearly impossible. If you are a Google-sized company, of course. Totally impossible in other cases.
I run an outsourcing company. Our statistics so far:
500 CVs viewed per month
50 interview invitations sent per month
10 interviews conducted per month
1 job offer made (and usually refused) per month
And here we are looking for mid-level developers in Russia.
Initially we wanted to hire some top-notch engineers and were ready to pay “any sum of money that would fit on the check”. We sent many invitations. Best people laughed at us and didn’t bother. Those who agreed – knew nothing. After that we had to shift our expectations greatly.
Still, we manage to find good developers from time to time. None of them can be considered super-expert, but as a team they cooperate extremely effectively, get the job done and all of them have that engineering spirit and innate curiosity that causes them to improve.
It takes constant human effort to keep sites like Google and Gmail online. Right now a Google engineer is fixing something that no one will ever know was broken. Some server somewhere is running out of memory, a fiber link has gone down, or a new release has a problem and needs to be rolled back. There are careful procedures, early warnings, and multiple layers of redundancy to ensure that problems never become visible to end users.
Sometimes problems do become visible but not in a way that an individual user can attribute to the site. A request might not get a prompt response, or any at all, but the user will probably blame the internet or their computer, not the site. Google itself is very rarely glitchy, but services like image search do sometimes have user visible problems.
And then of course, very rarely, a giant outage brings down something giant like YouTube or Google Cloud. But if it weren’t for an army of very smart, very diligent people, outages would happen much more often.
It’s what they don’t understand. 10x software engineers don’t really understand their job description.
They tend to think all these other things are their responsibility. And they don’t necessarily know why they’re doing all these other things. They just sense that it’s the right thing to do. If they spot something is wrong, they will just fix it. Sometimes it even seems like they’re not in control of what they do. It’s like a conscientiousness overdose.
10x engineers are often all over the code base. It’s like they have no idea they’re just part of one eng team.
I don’t think the premise behind the question is entirely true. These companies rely completely on programming problems only with junior candidates, who are not expected to have significant experience. Senior candidates do, in fact, get assessed based on their experience, although it might not always feel like it.
Let me illustrate this with an interview process I went through when interviewing for one of the aforementioned companies (AFAIK it’s typical for all of the above). After the phone screen, there was an on-site interview with 5 consecutive interviews: 2 whiteboard coding + 2 whiteboard architecture problems + 1 behaviour interview. On the surface, it looks like experience doesn’t play a part, but, SURPRISE, experience and past projects play a part in 3 interviews out of 5. A large part of the behavioural interview was actually discussing past projects and various decisions. As for the architecture problems, it’s true that the problem discussed is a new one, but those are essentially open-ended questions, and the candidate’s experience (or lack thereof) clearly shines through. Unlike the coding exercises, these questions are almost impossible to solve without having tackled something similar in the past.
Now, here are a few reasons why the emphasis is still on solving new problems rather than diving into the candidate’s home territory, in no particular order:
Companies do not want to pass over strong candidates that just happen to be working on some boring stuff.
Most times companies do not want to clone a system that the candidate has worked on, so the ability to learn from experience, and apply it to new problems is much more valuable.
When the interviewer asks different candidates to design the same system, they can easily compare different candidates against one another. The interviewer is also guaranteed to have a deep understanding of the problem they want the candidate to solve.
People can exaggerate (if not outright lie about) their role in a particular project. This might be hard to catch in one hour, so it’s better to avoid the situation in the first place.
(This one is a minor concern, but still) Large companies hire by committee, where interviewers are gathered from the whole company. The fact that they shouldn’t discuss previous projects, removes the need to coordinate on questions, by preventing a situation where two interviewers accidentally end up talking about the same system, and essentially doing the interview twice.
Originally Answered: What can I, currently 17 years old, do to become an engineer/entrepreneur like Elon Musk?
This is a quick recap of my earlier response to a similar question on Quora:
I would recommend that you take a close look at the larger scheme of things in your life, by spending some time and effort to design your life blueprint, using Elon Musk as your inspiration and/or visual model.
By the way, here’s my quick snapshot of his beliefs and values:
1) Focus on something that has high value to someone else;
2) Go back to first principles, so as to understand things more deeply and widely, especially their implications;
3) Be very rigorous in your own self-analysis; constantly question yourself, especially on the practicality of the idea(s) you have;
4) Be extremely tenacious in your pursuits;
5) Put in 100 hours or more every week, as sweat equity of intense efforts and focused execution count like hell;
6) Constantly think about how you could be doing better, faster, cheaper and smarter;
7) Relentlessly and ruthlessly think about how to make a better world;
Again, here’s my quick snapshot of his unique traits and characteristics:
ix) spiritual development (including contributions to society, volunteering, etc.);
2) Translate all your long-range goals and objectives in (1) into specific, prioritised and executable tasks that you need to accomplish daily, weekly, monthly, quarterly and even annually;
3) With the end in mind as formulated in (1) and (2), work out your start-point, endpoint and the developmental path of transition points in between;
4) Pinpoint specific tasks that you need to accomplish at each transition point till the endpoint;
5) Establish metrics to measure your progress, or milestone accomplishments;
6) Assign and allocate personal accountability, as some tasks may need to be shared, e.g. with team members, if any;
7) Identify and marshal resources that are required to get all the work done;
[I like to call them the 7 M’s: Money; Methods; Men; Machines; Materials; Metrics; and Mojo!]
8) Schedule a timetable for completion of each predefined task;
9) Highlight potential problems or challenges that may crop up along the Highway of Life, as you traverse on it;
10) Brainstorm a slew of possible strategies to deal with (9);
This is your contingency plan.
11) Institute some form of system, like a visual Pert Chart, to track, control and monitor your forward trajectory, as laid out in your systematic game plan, in conjunction with all the critical elements of (4) to (10);
12) Follow-up massively and follow-through consistently your systematic game plan;
13) Put in your sweat equity of intense effort and focused execution;
14) Stay focused on your strategic objectives, but remain flexible in your tactical execution;
You aren’t so stressed and nervous when you are practicing LeetCode, because your career doesn’t depend on how well you do while solving LeetCode.
When solving LeetCode, you aren’t expected to talk to the interviewer to get clarifications on the problem statement or input format. You aren’t expected to get hints and guidance from the interviewer, and to be able to pick them up. You aren’t expected to be able to communicate with other human beings in general, and to be able to talk about the technical details of your solution in particular. You aren’t expected to be able to prove and explain your idea in a clear, structured way. You aren’t expected to know how to test your solution, how to scale it, or how to adjust it to some unexpected additional constraints or changes. You may not be able to simply take the constraints on input size and use them to figure out the complexity of the expected solution. You have a limited amount of time, so even if you slowly got through most of LeetCode, you may still struggle to get stuff done in 45 minutes. And many more… You don’t need any of these things to solve LeetCode, so you usually don’t practice them by solving LeetCode; you may not even know that you need to improve something there.
To sum it up: two main reasons are:
Higher stakes.
Lack of skills that are required at typical Google/Facebook interview, but not covered by solving LeetCode problems on your own.
You should also keep in mind that LeetCode isn’t the list of problems being asked at Google or Facebook interviews. If anything, it is more of a list of problems that you aren’t going to be asked, because companies ban leaked questions 🙂 You may get a question that is surprisingly different from what you did at LeetCode.
Originally Answered: I failed all technical interviews at Facebook, Google, Microsoft, Amazon and Apple. Should I give up the big companies and try some small startups?
Wanted to go Anonymous for obvious reasons.
Reality is stranger than Fiction.
In 2010: After graduation, I was interviewed by one of the companies mentioned above for an entry-level Software Engineering role. During the interview, the person told me: ‘You can never be a Software Engineer’. Seriously? Of course I didn’t get hired.
In 2013: I interviewed again with the same company but for a different department and got hired.
Fast forward to December 2016: I had received 2 promotions since 2013 and was now above the grade level of the guy who interviewed me. I remember the date, Dec 14, 2016: I went to his desk and asked him to go out for a coffee. Initially he didn’t recognize me, but later he did and we went out for a coffee. Needless to say, he was apologetic for his behavior.
For me, it felt REALLY GOOD. It’s a story I’ll tell my grandkids! 🙂
Big tech interviews at FAANG companies are intended to determine – as much as possible – whether you’ve got the knowledge and attributes to be a successful employee. A big part of that for software developers is familiarity with a good set of data structures and algorithms. Interview loops vary, but a good working knowledge of common algorithms will almost always come in handy for both interviews and the job.
Algorithm-related questions I was asked in my first five years, or that I ask people with less than 5 years of experience: sorting, searching, applying hashes correctly, mapping, medians and averages, trees, linked lists, traveling salesman (I was asked this a couple of times, never asked it), and many more.
I never recommend an exhaustive months-long review before an interview, but it’s always a good idea to make sure you’re current on your basics: hash tables and sets, string operations, working with arrays and vectors and lists, binary trees, and linked lists.
Compared to other modern languages, Python has two features that make it attractive, and that also make learning a second language difficult if you started with Python. The first is that, despite some minor steps to allow annotation, Python is loosely and dynamically typed. The second is that Python provides a lot of syntactic sugar; this is shorthand, like a map function, where you can apply a function to each element in a data structure.
Do these features make it harder to switch to another language that is strongly and statically typed? For some people, yes, and for others, no.
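As a concrete illustration of the sugar being described (my example, not the answer author’s): Python’s one-liner list(map(lambda x: x * 2, nums)) becomes something like this in a statically typed language such as Java, where the element and result types have to be spelled out.

import java.util.List;
import java.util.stream.Collectors;

public class MapExample {
    public static void main(String[] args) {
        List<Integer> nums = List.of(1, 2, 3);
        // Same "apply a function to each element" idea, with the types made explicit.
        List<Integer> doubled = nums.stream()
                .map(x -> x * 2)
                .collect(Collectors.toList());
        System.out.println(doubled); // [2, 4, 6]
    }
}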
Some programmers are naturally curious what’s happening under the hood. How are data being represented and manipulated? Why does an operation produce one type of result in one situation, and another type of result in another situation? If you are the kind of person who asks these questions, you are more likely to have an easier time transitioning. If you are a person who finds these questions uninteresting or even distasteful, transitioning to another language can be very painful.
I have excellent skills and experience on my resume, which makes it stand out.
Seriously, there is no magical spell that will make a crappy resume attractive to recruiters. Most people give up believing in magic after they are 5 or 6 years old. A software engineer who believes in magic is not a good candidate for hire.
All those complaints you have about their products? The people working there complain about the same exact things. Microsoft employees complain about how slow Outlook is. Google employees complain about everything changing all the time. Salesforce employees complain about how hard our products are to use.
So why don’t we do something about it? There are a few possible answers:
We are actively doing something about it right now and it will be fixed soon.
The problem is technically difficult to fix. For example, it’s currently beyond the state of the art to change the wake word (“Alexa”/”OK Google”) to a user-selected word. A variation of this is a problem that’s more expensive to fix than the amount of annoyance it would save.
The team responsible for that functionality has problems. Maybe they have a bad manager or have been reorged a lot, and as a result they haven’t been doing a good job. Even once the problem is solved, it can take a long time to catch up.
The problem is related to making money. For example, Microsoft used to have a million different versions of Office, each including different programs and license restrictions. It was super confusing. But the bean counters knew how much extra money the company made from these bundles, compared to a simpler scheme, and it was a lot. So the confusion stayed.
The problem is cultural. For example, Google historically made its reputation by offering new features constantly. Everything about the culture was geared towards change and innovation. When they started making enterprise products, that culture became baggage.
But none of that keeps the employees from complaining.
That’s perhaps the first stage of learning, recitation.
Using the four-stage model of learning that goes
Unconscious Incompetence
Conscious Incompetence
Conscious Competence
Unconscious Competence
that’s maybe a 2 to 2.5 there. You know you haven’t really understood why you are doing things that way and without detailed step-by-step, you don’t yet know how you would design those solutions.
You need to step back a bit, by reviewing some working solutions and then using those as examples of fundamentals. That might mean observing that there is a for() loop, for example – why? What is it there for? How does it work? What would happen if you changed it? If you wanted to use a for loop to write out “hello!” 8 times, how would you code that?
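For instance, the “hello!” exercise above might look like this in Java; any language works, this is just one way to write it.

public class Hello {
    public static void main(String[] args) {
        // The counter starts at 0 and the loop runs while i < 8, so the body executes 8 times.
        for (int i = 0; i < 8; i++) {
            System.out.println("hello!");
        }
    }
}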
As you build up the knowledge of these fundamental steps, you’ll be able to see why they were strung together the way they were.
Next, practice solving smaller challenges. Use each of these tiny steps to create a solution – one where you understand why you chose the pieces you chose, what part of the problem it solves and how.
Early 2020 was a very rough period: many companies laid off tons of good people, many of whom bounced to a company that was not a good fit and eventually went to a third one. Forced remote work was also difficult for many folks. So in the current context, having changed 3 jobs in the last 4 years is really a non-event.
Now more generally, would my hiring recommendation be influenced by a candidate having changed jobs several times in a short period of time?
The assumption here is that if a candidate has switched jobs 3 times in 4 years, there must be something wrong.
I think this is a very dangerous assumption. There are lots of things that cause people to change jobs, sometimes choice, sometimes circumstances, and they don’t necessarily indicate anything wrong in the candidate. However, what could be wrong in a candidate can be assessed in the interview, such as:
Is the candidate respectful? Is the candidate able to disagree constructively?
Does the candidate collaborate?
Does the candidate naturally support others?
Has the candidate experience navigating difficult human situations?
etc, etc.
There are a lot of signals we can detect in the interview and we can act upon them. Everything that comes outside of the interview / outside of reference check is just bias and should be ignored.
My IQ was around 145 the last time I checked (I’m 19).
I feel lots of gratitude for my ability to deeply understand and comprehend ideas and concepts, but it has definitely had its “downsides” throughout my life. I tend to think very deeply about things that I find interesting and this overwhelming desire to understand the world has led me to some dark places. When I was around 9 or 10, I discovered the feeling of existential panic. I had watched an astronomy documentary with my father (who is a geoscience professor) and was completely overwhelmed with the fact that I was living on an unprotected orb, orbiting around a star at speeds far faster than I could even comprehend. I don’t think anyone in my family expected me to really grasp what the documentary was saying so they were a bit alarmed when I spent that whole night and most of the next week panicking and hyperventilating in my bedroom.
I lost my mom to suicide when I was 11 which sent me into a deep depression for several years. I found myself thinking a lot about death and the meaning of human existence in my earlier teenage years. I was really unmotivated to do school work all throughout high school because I found no meaning in it. I didn’t understand why I was alive, or what being alive meant, or if there even was any true meaning to life. I constantly struggled to see how any of it truly mattered in the long run. What was the point of going to the grocery store or hanging out with my friends or getting a drivers license? I was an overdeveloped primate forced to live in and contribute to a social group that I didn’t ask to be in. I was living in a strange universe that made no sense and I was being expected to sit at a desk for 8 hours every day? Surrounded by people who didn’t care about anything except clothing and football games? No way man, count me out. I spent a lot of nights just sitting in my bedroom wondering if anything I did really mattered. Death is inevitable and the whole universe will one day end, what’s the point. I frequently wondered if non-existence was inherently better than existence because of all of the suffering that goes hand in hand with being a conscious being. I didn’t understand how anyone could enjoy playing along in this complex game if they knew they were all going to die eventually.
Heavy stuff, yeah.
When I was 18 I suddenly experienced what some people label as an “ego death” or a “spiritual awakening” in which it suddenly occurred to me that the inevitability of death doesn’t mean that life itself is inherently meaningless. I realized that all of my actions affect the universe and I have the ability to set off chain reactions that will continue to alter the world long after I’m gone. I also realized that even if life is inherently meaningless, then that is all the more reason to enjoy being alive and to experience the beauty and wonder of the world while I’m still around. After that day I began meditating daily to achieve a deeper awareness of myself and try to find inner peace. I began living for the experience of being alive and nothing else. All of this has brought me great peace and has allowed me to enjoy learning again. For so long learning was terrifying to me because it meant that I was going to understand new information that could potentially terrify me. Information that I could not unlearn. I have become a very emotionally sensitive person after the death of my mother, so I simply could not handle the weight of learning about existential concepts for a while. Now that I’ve been able to find a state of peace within myself and radically accept the fact that I will die one day (and that I do not know what occurs after death) I have begun to enjoy learning again! I read a lot of nonfiction and fiction alike. I enjoy traveling and seeing the world from as many different perspectives as possible. Talking to new people and attempting to see my world through their eyes is very enjoyable for me. Picking up new skills is generally very easy for me and I spend a lot of my free time pondering philosophical issues, just because it’s fun for me. I’m not a very social person, I like having a few close friends, but I mostly enjoy being alone.
So all in all, I think having an IQ of 140+ is a very turbulent experience that can be very beautiful! When you are able to truly understand deep concepts, it can seriously freak you out, especially when you’re searching for meaning and answers to philosophical problems. If I hadn’t embraced a way of life that revolves around radical acceptance, I don’t think I would have the guts to look as deeply into some things as I do. However, since I do have that safety cushion, I’m able to shape my perception of the world with the knowledge that I learn. This allows me to see incredible beauty in our world and not take things too personally. When I have a rough day, all I need to do is sit on my roof for half an hour and look at the stars. It reminds me that I am a very small animal in a very big place that I know very little about. It really puts all of my silly human problems in perspective.
If you can explain to me how “no-code is the future”, maybe there’s a useful response to this.
As far as I can tell, “no-code” means that somebody already coded a generic solution and the “no-code” part is just adapting the generic solution for a specific problem.
Somebody had to code the generic solution.
As to the second part, “is a CS major even worth it?” I’ve had a 30+ year career in software engineering, and I didn’t major in CS. That hasn’t kept me from learning CS concepts, it hasn’t kept me from delivering good software, and it hasn’t stopped me from getting software jobs.
Is a CS major even worth it? Only the student knows the answer to that.
People have written no-English versions of many programming languages – but they aren’t used as much as you’d think because it’s just not that useful.
Consider the C language – there are no such English words as “int”, “bool”, ”enum”, “struct”, “typedef”, “extern”, or “const”. The words “auto”, “float” and “char” are English words – but with completely different meanings to how they are used in C.
This is the complete list of C “reserved words” – things you’d have to essentially memorize if you’re a non-English speaker…
…but very few of those words are used in their usual English meanings…and you have to just know what things like “union” mean – even if you’re a native english speaker.
But if you really think there is an advantage to this being your native language then:
#define changer switch
#define compteur register
#define raccord union
…and so on – and now all of your reserved words are in French.
I don’t think it’s going to help much.
IT’S ABOUT LIBRARIES AND DOCUMENTATION:
The problem isn’t something like the C language – we could easily provide translations for the 30 or so reserved words in 50 languages and have a #pragma or a command to the compiler to tell it which language to use.
No problem – easy stuff.
However, libraries are a much bigger problem.
Consider OpenGL – it has 250 named functions, and hundreds of #defined tokens.
glBindVertexArray would be glLierTableauDeSommets or something. Making versions of OpenGL for 50 languages would be a hell of a lot more painful.
Then, someone has to write documentation for all of that in all of those languages.
But a program written and compiled against French OpenGL wouldn’t link to a library written in English – which would be a total nightmare.
Worse still, I’ve worked on teams where there were a dozen US programmers, two dozen Russians and a half dozen Ukrainians – spread over two continents – all using their own languages ON THE SAME PIECE OF SOFTWARE.
Without some kind of control – we’d have a random mix of variable and function names in the three languages.
So the rule was WE PROGRAM IN ENGLISH.
But that didn’t stop people from writing comments and documentation in Russian or Ukrainian.
SO WHAT IS THE SOLUTION?
I don’t think there actually is a good solution for this…picking one human language for programmers to converse in seems to be the best solution – and the one we have.
There are 1.3 billion English speakers, 1.1 billion Mandarin speakers, 600 million Hindi speakers, 450 million Spanish speakers… and no other language gets over half of that.
So if you have to pick a single language to standardize on – it’s going to be English.
Those who argue that Mandarin should be the choice need to understand that typing Mandarin on any reasonable kind of keyboard was essentially impossible until 1976 (!!) by which time using English-based programming languages was standard. Too late!
SO – ENGLISH IT IS…KINDA.
Even though we seem to have settled on English the problems are not yet over.
British English or US English – or some other dialect?
As a graphics engineer, it took me the best part of a decade to break the habit of spelling “colour” rather than “color” – and although the programming languages out there don’t use that particular word – the OpenGL and Direct3D libraries do – and they use the US English spelling rather than the one that people from England use in “English”.
ARE PROGRAMMERS UNIQUE IN THIS?
No – we have people like airline pilots and ships’ captains.
ICAO (the International Civil Aviation Organization) requires all pilots to have attained ICAO “Level 4” English ability. In effect, this means that all pilots who fly international routes must speak, read, write, and understand English fluently.
However, that’s not what happened for ships. In 1983 a group of linguists and shipping experts created “Seaspeak”. Most words are still in English – but the grammar is entirely synthetic. In 1988, the International Maritime Organization (IMO) made Seaspeak the official language of the seas.
Here’s the thing. The compensation will never be comparable.
When you join a big tech, public company, all of your compensation is public. Also it’s relatively easy to get a fair estimate of what comp looks like a few years down the road.
When you join a private company, the comp is a bet on a successful exit.
In 2015, Zenefits was a super hot company. Zoom had been around for 4 years and was still relatively unknown.
In a now infamous Quora question[1], a user asked whether they should take an offer at Zenefits or Uber. As a result, the Zenefits CEO rescinded their offer. But most people would have chosen an offer at Zenefits or Uber, whose IPO was the most anticipated back then, over one at Zoom.
And yet Zenefits failed spectacularly, Uber’s IPO was lackluster, while Zoom went beyond all expectations.
So this is mostly about risk aversion. Going to a large co means a “golden resume” that will always get you interviews, so it has a lot of long-term value.
Working in a large company has other benefits. Processes are usually much better and there’s a lot to learn. This is also the opportunity to work on some problems at a huge scale. No one has billions of users outside of Google, Meta, Apple or Microsoft.
But working in a small private company whose valuation explodes is the only way for a software engineer to become very wealthy. The thing is though that it’s impossible for an aspiring employee to tell which company is going to experience that growth versus fail.
The pros and cons really depend on the specific situation.
(1) When quitting for a new position…
Pros:
Better pay & benefits
More promotion opportunities
New location
New challenges (old job may have been boring)
New job aligned to your interests.
Cons:
New job/company was seriously misrepresented
“New boss same as the old boss” (no company is perfect!)
You might have wanted a new challenge, but you are now in over your head.
Note: if you have a job and are not desperate, please do your homework and remember you are also interviewing them! You want a better job in most cases (unless that moving thing is going on).
(2) When quitting over a conflict…
Pros:
Can sleep at night (provided it was an ethical issue and you were in the right)
You showed them who is the boss!
Plus, you won’t be on the local news if they get sued, or the IRS does an audit.
Again, if it was a toxic environment, you get to live, as opposed to having a stroke on the job! No job is worth it if it is impacting your health, including mental health.
Cons:
No unemployment in most states if you just up and quit.
A job search with no income puts a lot of pressure on you to take any job at some point
The good news, though, is you can continue looking while earning a paycheck (and hopefully still growing skills & experience)
The reason so many people are quitting now…
Note there is a third category: when you quit due to a lifestyle change. In this case, we are looking at a woman quitting to be a full-time mother, or someone going back to school. A spouse getting promoted but having to move might also place the other mate in this position…
Pro:
You get to live the life you want.
You are preparing for a better career
Con:
Loss of income
Reduced social interaction (for the full-time mom)
Note here that most couples that decide on the stay-at-home-mom arrangement generally plan ahead so one income will cover their expenses.
Second, I also don’t consider serious health issues that cause you to leave the workforce entirely to fall under the scope of this discussion.
Originally Answered: Is practicing 500 programming questions on LeetCode, HackerEarth, etc enough to prepare for Google interview?
If you have 6 months to prepare for the interview I would definitely suggest the following things assuming that you have a formal CS degree and/or you have software development experience in some company:
Step 1 (Books/Courses for good understanding)
Go through a good data structure or algorithms book and revise all the topics like hash tables, arrays and strings, trees, graphs, tries, bit hacks, stacks, queues, sorting, recursion, and dynamic programming. Some good books according to me are:
The Stanford Coursera algorithms courses are also very good and you can look at them if you have time. They’re a bit more theoretical though.
Step 2 (Programming practice for algorithms and data structures)
Once you are done with Step 1 you need a lot of practice. It need not be a set number of problems like 500 or 1000. The best way to practice problems is to mimic an interview setting: time yourself for half an hour and solve a problem without any distraction. The steps are to read the problem, think of a brute-force solution that works, then think of an optimized version, and then write clean working code and come up with test cases, all within half an hour. Most of the top companies ask you 1 or 2 medium problems or 1 hard problem in 45 minutes to 1 hour. Once you are done solving the problem you can compare your solution with the actual solution and see if there is scope to improve your solution or learn from the actual solution.
If you do the math, it takes half an hour to solve a problem and at least 15 minutes to look at and compare with the correct solution. So 500 problems take 500 * 45 minutes = 375 hours. Even if you spend 5 solid hours a day on problem-solving, it comes to 75 days (2.5 months). If you are in a full-time job it’s hard to spend so much time every single day. Realistically, if you spend 2–3 hours a day we are talking about 5 months just to practice 500 problems. In my opinion, you don’t need to solve so many problems to crack the interview. All you need is a few problems in each topic and to understand the fundamentals really well. The different topics for algorithms and data structures are:
arrays and strings, bit hacks, dynamic programming, graphs, hash tables, linked lists, math problems, priority queues, queues, recursion, sorting, stacks, trees, and tries. As a starter, try to solve 4–5 problems in each topic after you finish Step 1, and then if you have time solve 2–3 problems a day for fun in each topic and you should be good. Also, it is far better to solve 5 problems than to read 50 problems. In fact, trying to cover problems by reading them is not going to be of any use.
Step 3 (this can be done in parallel with step 1) (Systems Design)
Practice problems in systems design (distributed systems, concurrency, OO design). These questions are common at Google and other top companies. The best way to crack this section is to actually do complex systems projects at work or as school projects. There are lots of resources online which are very good preparation for this topic.
Edit: Since I have received some requests to point to resources, I am listing some of my favorites:
Step 4 (Resume and company research)
Please know your resume in and out and make sure you can explain all the projects mentioned in it. You should be able to dive as deep as needed (technically) into the projects mentioned. Also do enough research about the company you are interviewing with, its product and engineering culture, and have good questions to ask them.
Step 5 (mock interviews)
Last but not least, please make sure you have some good friends working at a good company, or a classmate, mock interview you. There are also several online resources for this service. Also, work on the feedback you get from the mock interviews. You can also interview at a few companies you are not interested in working for, as practice before your goal companies.
It is possible for some people; I don’t know whether it is possible for you.
You’re solving 50% of easy problems. Reality check: that’s… cute. To have a good chance, your target success rate should be near 100% on Easy, 75% on Medium, and 50% on Hard. On top of that, non-LeetCode rounds like system design should be solid, too.
You can see there’s a big gap between where you are and where you need to be.
The good news is that despite how large that gap is, without a doubt, there have been cases of people being able to learn fast enough to cover that gap in 90 days. These cases are not at all common, and I will warn you that the vast majority of people who are where you are now cannot get to where you need to be in 90 days. So, the odds are against you, but you might be better than the odds would say.
What is special about the situations of the people who can get there that fast? Off the top of my head, the key factors are:
A strong previous background in CS and algorithms
Being able to spend a significant amount of time daily to study
High aptitude / talent / intelligence for learning these sorts of concepts
Having an effective methodology for learning. The fact that you’re actively solving problems on Leetcode is a decent start here.
If the above factors describe you, you might be better off than the odds would suggest. It is at least possible that you could achieve your goal.
(Note: I’ve interviewed hundreds of developers in my time at Facebook, Microsoft and now as the co-founder and CEO of Educative. I’ve also failed several coding interviews because I wasn’t prepared. At Educative, we’ve helped thousands of developers level up their careers with hands-on courses on programming languages, system design, and interview prep.)
Is Interview Prep a Full-time Job?
Let’s break it down. A full-time job – 40 hours per week, 52 weeks per year – encompasses 2080 hours. If you take two weeks of vacation, you’re actually working 2,000 hours. The 1,000 hours recommendation is saying you need six months of full-time work to prepare for your interview at a top tech company. Really?
I think three months is a reasonable timeframe to fully prepare. And if you’ve interviewed more recently, studying the specific process of the company where you’re applying can cut that time down to 4-6 weeks of dedicated prep.
I’ve written more about the ideal interview prep roadmap for DEV Community, but I’ll give you the breakdown here.
The “Secret” to a Successful Interview Prep Plan
First of all, I want to be clear that there’s no silver bullet to interview prep. But during my time interviewing candidates at Facebook and Microsoft, I noticed there was one trait that all the best candidates shared: they understood why companies asked the questions they did.
The key to a successful interview prep program is to understand what each question is actually trying to accomplish. Understanding the intent behind every step of the interview process helps you prepare in the right way.
A lot of younger developers think they need to be experts in a few programming languages, or even just one language in order to crack the developer interview. Writing efficient code is a crucial skill, but what software companies are actually looking for (especially the big ones with custom libraries and technology stacks that you will be expected to learn anyway) is an understanding of the various components of engineering, as well as your creative problem-solving ability.
That breaks down into five key areas that “Big Tech” companies are focused on in the interview process:
1. Coding
Interviewers are testing the basics of your ability to code. What language should you be using? Start with the language you know best. Especially in larger companies, new syntaxes can be taught or libraries used if you establish you can execute well. I have interviewed people that used programming languages that I barely know myself. I know C++ inside and out, so even though Python is a more efficient language, I would always personally choose to interview using C++. The most important thing is just to brush up on the basics of your favorite programming language.
The questions in coding interviews focus on generic problem-solving, data structures (Mastering Data Structures: An interview refresher), and algorithms. So revisit concepts that you haven’t touched since undergrad to have a fresh, foundational understanding of topics like complexity analysis (Algorithms and Complexity Analysis: An interview refresher), arrays, queues, trees, tries, hash tables, sorting, and searching. Then practice solving problems using these concepts in the programming language you have chosen.
2. Concurrency
Whether you’re building a mobile app or web-scale systems, it’s important to understand threads, locks, synchronization, and multi-threading. These concepts are some of the most challenging and factor heavily into your “hiring level” at many organizations. The more expert you are at concurrency, the higher your level, and the better the pay.
Since you’ve already determined the language you’re using in (1), study up on process handling using that same language. Prepare for an interview – Concurrency
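As a quick illustration of what threads, locks, and synchronization mean in practice, here is a minimal sketch in Java (my own example, not from any particular interview): two threads incrementing a shared counter, which only stays correct because the increment is synchronized.

public class SafeCounter {
    private int count = 0;

    // Without synchronized, the two threads below could interleave and lose increments.
    public synchronized void increment() {
        count++;
    }

    public synchronized int get() {
        return count;
    }

    public static void main(String[] args) throws InterruptedException {
        SafeCounter counter = new SafeCounter();
        Runnable task = () -> {
            for (int i = 0; i < 100_000; i++) counter.increment();
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();  // wait for both threads before reading the result
        t2.join();
        System.out.println(counter.get()); // always 200000 thanks to the lock
    }
}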
3. System Design
Like concurrency problems, system design is now key to the hiring process at most companies, and has an impact on your hiring level.
There isn’t a clear-cut answer to an open-ended question where a candidate must work their way to an efficient, meaningful solution to a general problem with multiple parts.
Most candidates don’t have a background designing large-scale systems in the first place, as reaching that level is several years into a career path and most systems are designed collaboratively anyway.
For this reason, it is important to spend time clarifying the product and system scope, doing a quick back-of-the-envelope estimation, defining APIs to address each feature in the system scope, and defining the data model. Once this foundational work is done, you can take the data model and features to actually design the system.
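As an example of what a back-of-the-envelope estimation can look like (the numbers below are invented purely for illustration): with 10 million daily active users making 10 requests a day, that is 100 million requests per day, or roughly 100,000,000 / 86,400 ≈ 1,200 requests per second on average, and perhaps 2 to 3 times that at peak; if each request stores about 1 KB, that is around 100 GB of new data per day. Numbers like these drive the caching, sharding, and storage decisions before you draw any boxes.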
4. Object-Oriented Design
In Object-Oriented Design questions, interviewers are looking for your understanding of design patterns and your ability to transform the requirements into comprehensible classes. You spend most of your time explaining the various components, their interfaces, and how different components interact with each other using those interfaces. Interviewers are looking for your ability to identify patterns and to apply effective, time-tested solutions rather than re-inventing the wheel. In a way, it is the partner of the system design interview.
5. Cultural Fit
This is the one that doesn’t have a clear-cut learning path, and because of that, it is often overlooked by developers. But for established companies like Google and Amazon, culture is one of the biggest factors. The skills you demonstrate in coding and design interviews prove that you know programming. But without the right attitude, are you open to learning? Are you passionate about the product and want to build things with the team? If not, companies may think you’re not worth hiring. No organization wants to create a toxic work environment.
Since every company has a few different distinguishing features in their culture, it’s important to read up on what their values and products are (Coding Interview Preparation | Codinginterview has information on many top tech companies, including Google and Facebook). Then enter the interview track ready to answer these basics:
Show interest in the product, and demonstrate understanding of the business. (Don’t mistake Facebook’s business model, which relies on big data, for AWS or Azure, which facilitate big data as a service. If you’re going into Google, know how user data and personalization are the core of Google’s monetization for its various products and services, while knowing what makes Android unique compared to iOS. Be an advocate.)
Be prepared to talk about disagreements in the workplace. If you’ve been working for more than a few years, you’ve had disagreements. Even if you’re coming out of school, group projects apply. Companies want to know how you work on a team and navigate conflict.
Talk about how the company helps you build and execute your own goals both as a technologist and in your career. What are you passionate about?
Talk about significant engineering accomplishments – what have you built; what crazy/difficult bugs have you solved?
Conclusion
Strategic interview prep is essential if you want to present yourself as the best candidate for an engineering role.
It doesn’t have to take 1,000 hours, nor should it – but at big companies like Google and Facebook where the interview process is so intentional, it will absolutely benefit you to study that process and fully understand the why behind each step.
There are plenty of battle-tested resources linked in my answer that will guide you throughout the prep process, and I hope they can be helpful to you on your career journey.
Originally Answered: I have practiced over 300 algorithms questions on LintCode and LeetCode but still can’t get any offer, what should I do?
I have interviewed and been interviewed a number of times, and I have found out that most of the time people (including myself) flunk an interview due to the following reasons:
Failing to come up with a solution to a problem: If you can’t come up with even one single solution to a problem, then it’s definitely a red flag since that reflects poorly on your problem solving skills. Also, don’t be afraid to provide a non-optimal solution initially. A non-optimal solution is better than no solution at all.
Coming up with solutions but can’t implement them: That means you need to work more on your implementation skills. Write lots and lots of code, and make sure you use a whiteboard or pen and paper to mimic the interview experience as much as possible. In an interview you won’t have an IDE with autocomplete and syntax highlighting to help you. Also make sure that you’re very comfortable in your programming language of choice.
Solving the problem but not optimally: That could mean that you’re missing some fundamental knowledge of data structures and algorithms, so make sure that you know your basics well.
Solving the problem but after a long time, or after receiving too many hints: Again, you need more problem solving practice.
Solving the problem but with many bugs: You need to properly test your code after writing it. Don’t wait for the interviewer to point out the bugs for you. You wouldn’t want to hire someone who doesn’t test their code, right?
Failing to ask the interviewer enough questions before diving into the code: Diving right into the code without asking the interviewer enough questions is definitely a red flag, even if you came up with a good solution. It tells the interviewer that you’re either arrogant or reckless. It’s also not in your favor, because you may end up solving the wrong problem. Discussing the problem and asking the interviewer questions is important because it ensures that both of you are on the same page. The interviewer’s answers to your questions may also provide you with some very useful hints that may greatly simplify the problem.
Being arrogant: If you’re perceived as arrogant, no one will want to hire you no matter how good you are.
Lying on the resume: Falsely claiming knowledge of something, or lying about employment history is a huge red flag. It shows dishonesty, and no one wants to work with someone who is dishonest.
I hope this helps, and good luck with your future interviews.
Unless we’re talking about Google, which has problems that are unique to them in comparison to the rest, you can be sure that big tech companies ask LeetCode-style questions quite often. Seeing LeetCode Hard problems specifically, however, is not that common in these interviews, and it’s more likely that you’ll be facing LeetCode Medium questions and one or two Hard questions at best. This is because having a time limit to solve them as well as an interviewer right beside you already adds enough pressure to make these questions feel harder than they normally would be; increasing their difficulty would simply be detrimental to the interviewing process.
I suggest that you also avoid using the difficulty of the LeetCode questions you can solve as a way of telling whether you’re prepared for your interviews, because it can be pretty misleading. One reason this is the case is that LeetCode’s environment is different from an interviewing environment; LeetCode cares more about running time and the optimal solution to a problem, while an interviewer cares more about your approach to the question (an intuitive solution can always be optimized further through a discussion between you and the interviewer).
Another reason you should avoid worrying too much about LeetCode-style questions is that FAANG companies are starting to refrain from asking them, as they’re noticing that many candidates come to their interviews already knowing the answer to some of their questions; currently, if your interviewer notices that you already know the answer to the question you’re given, they won’t take it into account and instead will move on to another question, as already knowing how to solve the problem tells them nothing about the way you approach challenging situations in the first place.
Also, you should consider that LeetCode only lets you practice what you already know in coding; if you don’t have a good knowledge of data structures & algorithms beforehand, LeetCode will be a difficult resource to use efficiently, and it also won’t teach you anything about important non-technical skills like communication skills, which is a crucial aspect that interviewers also evaluate. Therefore, I also suggest that you avoid using LeetCode as your only resource to prepare for your technical interviews, as it doesn’t cover everything that you need to learn on its own.
For example, you may want to enroll in a program like Tech Interview Pro as you use LeetCode. TIP is a program that was created by an ex-Google software engineer and was designed to be a “how to get into big tech” course, with over 20 hours of instructional video content on data structures & algorithms and system design.
Another good resource that you could use, this time to cover the behavioral aspect of interviews, is Interviewing.io. With it, you can engage in mock interviews with other software engineers who have worked at Facebook and Google before, and also receive feedback on your performance.
You could also read a book like Cracking the Coding Interview, which offers plenty of programming questions that are very similar to what you can expect from FAANG companies, as well as valuable insight into the interviewing process.
Harvard is seen in popular culture as being very selective, and so any funnel which has a conversion rate lower than 5% is going to describe itself as “more selective than Harvard”. “More selective than Harvard” has 70m hits on Google. When Walmart opened a DC store, it hired about 2.5% of the people that sent applications, and ran a story that it was “twice as selective as Harvard”. Tech internships, somewhat unsurprisingly, are harder to get than jobs at Walmart.
Generally speaking, the more LeetCode problems you solve, the better your odds of getting an offer will be. Be careful, however, as using the number of problems you solve on LeetCode as a reference for how ready you are for your technical interviews is misleading, especially if it’s for Google and Facebook. Even if you solve every problem on LeetCode (please don’t try this), there’s still a chance you won’t get an offer, and there are several reasons why.
First of all, coding is not the only thing taken into consideration by interviewers from big tech companies. One of the main things they look for in a candidate is the presence of strong soft skills like teamwork, leadership, and communication. If you’re raising red flags in that department—if the interviewer doesn’t think you have the leadership skills to lead a team down the road, for example—odds are that you’re going to get overlooked. They also expect you’ll be able to clearly explain your thought process before solving a given coding problem, which is something a surprising number of developers have trouble with.
The second problem with using LeetCode alone is that it can only help you practice data structures & algorithms and system design, but not exactly teach you about them. This might not be an issue if you’re solving questions from the Easy section of LeetCode, but once you get to the Medium and Hard problem sets, you’ll need more theoretical knowledge to properly handle these problems.
So, ideally, you’ll want to prepare using resources that help you learn more about DS&A and systems design before you start practicing on LeetCode, and you’ll also want to work on your behavioral skills to ensure you do well there, too. Here are some tools that can help:
Interviewing.io: A site where you can engage in mock interviews with other software engineers—some of whom have worked at Google and Facebook—and receive immediate, objective feedback on your performance.
Tech Interview Pro: An interview prep program designed by a former Google software engineer that includes 150+ instructional video lessons on data structures & algorithms, systems design, and the interview process as a whole. TIP members also get access to a private Facebook group of 1,500+ course graduates who’ve used what they learned in the course to land jobs at Google, Facebook, and other big tech companies.
So, using LeetCode on its own would prepare you well for questions about data structures & algorithms, but may leave you unprepared for questions related to systems design and the behavioral aspect of your interviews. But by complementing LeetCode with other resources, you’ll put yourself in a much better position to receive an offer from Google, Facebook, or anyone else. Best of luck.
Dmitry Aliev is correct that this was introduced into the language before references.
I’ll take this question as an excuse to add a bit more color to this.
C++ evolved from C via an early dialect called “C with Classes”, which was initially implemented with Cpre, a fancy “preprocessor” targeting C that didn’t fully parse the “C with Classes” language. What it did was add an implicit this pointer parameter to member functions. E.g.:
struct S {
    int f();
};
was translated to something like:
int f__1S(S *this);
(the funny name f__1S is just an example of a possible “mangling” of the name of S::f, which allows traditional linkers to deal with the richer naming environment of C++).
What might come as a surprise to the modern C++ programmer is that in that model this is an ordinary parameter variable and therefore it can be assigned to! Indeed, in the early implementations that was possible:
struct S {
    int n;
    S(S *other) {
        this = other;   // Possible in C with Classes.
        this->n = 42;   // Same as: other->n = 42;
    }
};
Interestingly, an idiom arose around this ability: Constructors could manage class-specific memory allocation by “assigning to this” before doing anything else in the constructor. E.g.:
struct S {
    S() {
        this = my_allocator(sizeof(S));
        …
    }
    ~S() {
        my_deallocator(this);
        this = 0; // Disabled normal destructor post-processing.
    }
    …
};
That technique (brittle as it was, particularly when dealing with derived classes) became so widespread that when C with Classes was re-implemented with a “real” compiler (Cfront), assignment to this remained valid in constructors and destructors even though this had otherwise evolved into an immutable expression. The C++ front end I maintain still has modes that accept that anachronism. See also section 17 of the old Cfront manual found here, for some fun reminiscing.
When standardization of C++ began, the core language work was handled by three working groups: Core I dealt with declarative stuff, Core II dealt with expression stuff, and Core III dealt with “new stuff” (templates and exception handling, mostly). In this context, Core II had to (among many other tasks) formalize the rules for overload resolution and the binding of this. Over time, they realized that that name binding should in fact be mostly like reference binding. Hence, in standard C++ the binding of something like:
struct S {
    int n;
    int f() const {
        return this->n;
    }
} s = { 42 };
int r = s.f();
is specified to be approximately like:
struct S { int n; } s = { 42 };
int f__1S(S const &__this) {
    return (&__this)->n;
}
int r = f__1S(s);
In other words, the expression this is now effectively a kind of alias for &__this, where __this is just a name I made up for an unnamable implicit reference parameter.
C++11 further tweaked this by introducing syntax to control the kind of reference that this is bound from. E.g.,
struct S {
    int f() const &;
    int g() &&;
};
can be thought of as introducing hidden parameters as follows:
int f__1S(S const &__this);
int g__1S(S &&__this);
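A short usage sketch (mine, not from the original discussion) shows how those hidden reference parameters drive overload resolution: the const & qualified function binds the way an S const& parameter would, while the && qualified one accepts only rvalue objects:
struct S {
    int f() const & { return 1; } // __this is S const&: callable on lvalues (and on rvalues, as any const& binding)
    int g() &&      { return 2; } // __this is S&&: callable on rvalues only
};

int main() {
    S s;
    int a = s.f();    // OK: s is an lvalue
    int b = S{}.g();  // OK: S{} is an rvalue
    // int c = s.g(); // error: g() cannot be called on an lvalue
    (void)a; (void)b;
}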
That model was relatively well-understood by the mid-to-late 1990s… but then unfortunately we forgot about it when we introduced lambda expressions. Indeed, in C++11 we allowed lambda expressions to “capture” this:
struct S {
    int n;
    int f() {
        auto lm = [this]{ return this->n; };
        return lm();
    }
};
After that language feature was released, we started getting many reports of buggy programs that “captured” this thinking they captured the class value, when instead they really wanted to capture __this (or *this). So we scrambled to try to rectify that in C++17, but because lambdas had gotten tremendously popular we had to make a compromise. Specifically:
we introduced the ability to capture *this
we allowed [=, this] since now [this] is really a “by reference” capture of *this
even though [this] was now a “by reference” capture, we left in the ability to write [&, this], despite it being redundant (compatibility with earlier standards)
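A minimal sketch of the difference (my own example, not from the original answer): a lambda that captures this keeps referring to the live object, while a lambda that captures *this (C++17) works on a copy taken at the point the lambda is created:
#include <cstdio>

struct Counter {
    int n = 0;
    void demo() {
        auto by_pointer = [this]  { return n; }; // reads through the captured pointer
        auto by_copy    = [*this] { return n; }; // C++17: reads a copy of *this made right here
        n = 42;
        std::printf("%d %d\n", by_pointer(), by_copy()); // prints: 42 0
    }
};

int main() {
    Counter c;
    c.demo();
}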
Our tale is not done, however. Once you write much generic C++ code you’ll probably find out that it’s really frustrating that the __this parameter cannot be made generic because it’s implicitly declared. So we (the C++ standardization committee) decided to allow that parameter to be made explicit in C++23. For example, you can write (example from the linked paper):
struct less_than {
    template <typename T, typename U>
    bool operator()(this less_than self,
                    T const& lhs, U const& rhs) {
        return lhs < rhs;
    }
};
In that example, the “object parameter” (i.e., the previously hidden reference parameter __this) is now an explicit parameter and it is no longer a reference!
Here is another example (also from the paper):
struct X {
    template <typename Self>
    void foo(this Self&&, int);
};
struct D: X {};
void ex(X& x, D& d) {
    x.foo(1);        // Self=X&
    move(x).foo(2);  // Self=X
    d.foo(3);        // Self=D&
}
Here:
the type of the object parameter is a deducible template-dependent type
the deduction actually allows a derived type to be found
This feature is tremendously powerful, and may well be the most significant addition by C++23 to the core language. If you’re reasonably well-versed in modern C++, I highly recommend reading that paper (P0847) — it’s fairly accessible.
When an employee is hired, there is a step in the process where they are given a stack of documents to sign that (anecdotally) I’ll venture maybe 1 in 1,000 actually read. One of the least understood (or read) is the notice that the company controls, collects and analyzes all communications, internet activity and data stored on company-owned or -managed devices and systems.
This includes network traffic that flows across their servers. It’s safe to assume that mid-to-large employers are fully aware of the amount of on-the-clock time employees spend shopping, tweeting or watching YouTube, and know which employees are spending inordinate amounts of ‘company time’ shopping on Amazon rather than tackling assignments.
This also includes Bring Your Own Device policies, where employees are allowed to use their personal smartphone, tablet or laptop for business purposes. Companies don’t always ‘exploit’ the policy for nefarious surveillance purposes, but employers are within their rights to collect information like location data from your BYOD smartphone both on and off the clock.
An example of where this can hurt employees is when they start to look for another job.
If you email/Slack/message your supervisor and ask for a personal day off to attend to a family matter, but your device logs show you are accessing job-search sites and your location data suggests you aren’t at home (or even places you within the radius of a competitor’s office), they know. This tends to make your boss cranky, and can adversely impact your employment to the point of losing your job.
I disagree with this kind of intrusive surveillance. The presumption of guilt employees face when they take steps to protect themselves (using encrypted tools like Signal, using proxy servers, or switching devices to Airplane Mode) intrudes on their legitimate right to privacy: you may not want your employer to know that you’re seeing a psychiatrist on your lunch hour, and they have no reasonable expectation that you disclose this (or refrain from taking steps to conceal it).
I think so. I remember there was a noticeable number of people going to Facebook, and some discussion of it among the employees. And then there was an explicit event where Google rearranged its compensation strategy. Everyone got a huge raise just at that moment, and from that point on the salaries and stock grants became close to the top of the market, as they need to be for a company that hires top talent.
If you can’t get FAANG to pay attention to you, you probably need to get another job first. Perhaps one of the companies that are considered to be pretty good would be interested.
It is actually quite hard to get an entry-level role at a top tech company, because where you went to college (and internships, which you don’t have) plays a disproportionate role. It’s not surprising, because what else can they go on? Interviewing is expensive, and there are hundreds of applicants per opening, so they want to pre-filter candidates somehow.
Once you have a few years of experience, things look a little better, especially if you climb up the prestige pole. For instance, Microsoft (or Twitter where I work today) isn’t FAANG, but you can be sure that recruiters would take applicants from there seriously, and you would have a good chance to get an interview. But the main factor is what you manage to do in your time at work. If you do well, get promoted, demonstrate clear impact (that you can articulate externally), build your professional network, that would improve your chances to both get your foot in the door, and also to pass the interviews.
There are also other things you can do, but I think they depend on luck too much. Slowly improving your portfolio is the way to go, I think.
All of these companies assume that if you know the front-end domain, you can learn whatever technology du jour to become a front-end developer, and besides, if you don’t know anything about front-end, you can still grow into a front-end developer if that’s the path you’re interested in.
That being said, TypeScript is increasingly becoming the standard way to write client-side web code. Both Microsoft and Google are very committed to TS, while Facebook uses JavaScript with Flow. Google also uses Dart for some of its front end.
Likewise, there are a number of technologies on which the larger companies have made diverging choices. Google is very committed to gRPC (the g stands for Google), while Facebook is behind GraphQL (the graph being, originally, the “social graph” of Facebook). AFAIK, Microsoft uses both.
Neither Google nor Facebook has ever really embraced node.js. This would have seemed odd a few years ago, but now the web ecosystem is generally turning away from tools and web servers written in node.js. I don’t know for sure what Microsoft uses for its web servers.
Facebook is unsurprisingly very committed to React and React Native. Google though uses a number of web frameworks, including non-open sourced ones, and among others Angular and Flutter. Microsoft, AFAIK, uses React and React Native and Angular.
But all these skills are transferable. If you understand React, it’s easy to learn Angular and conversely; TypeScript and Flow have similarities, etc.
One common denominator is HTML, CSS, web APIs and web standards, which are always relevant.
Your goal, in an interview, is not to impress your interviewer, but to demonstrate that you have the necessary skill set to be hired.
In a large tech company, the threshold to be considered “impressive” is pretty high… you have people that had superlative achievements in their field (or outside of tech), and in their day to day they’re just treated like normal people. I never interviewed for Amazon, but I interviewed (and got hired) at both Facebook and Google, and both of my interviewer brackets included folks who had their own Wikipedia entry (and since then, all of my Facebook interviewers had amazing careers and most got their own Wikipedia page). So that’s the caliber of folks that your interviewers work with on a daily basis.
So your interviewer is not going to be impressed by your interview performance. That said, I’ve observed that many tech employees treat others as if they could be the next Ada Lovelace or the next Steve Jobs no matter their current achievements. This is not forced, but it’s an attitude that comes naturally because we’ve observed so many people achieve greatness. Interviewers would love nothing more than to give the highest recommendation for the candidate that they are seeing right now, it’s very fulfilling (conversely, having to reject a candidate is always a bit frustrating). So I think it’s fair that your interviewer is hoping you can become a superstar, but that hope is the same as for every other candidate and not directly linked to how well you are doing right now.
Google’s interview process leans towards making sure that an unsuitable candidate is not hired, they are ok if a few suitable candidates are missed in the process.
There is also a factor of chance involved in the process. Here is a story to prove that:
I have personally asked at least 5 engineers at Google if they would be willing to interview again, assuming they would be offered 1.5 times their current compensation but would lose the job if they didn’t clear the interview. I have yet to meet somebody willing to take this bargain, and I wouldn’t take it either.
By the way, Google also offers anybody who leaves the option to come back and join at the same level without an interview if they return within 2 years. My guess is that they also realize the chance involved.
Not clearing an interview at Google is an indicator of only one thing: that you did not clear a Google interview. Don’t draw conclusions about your ability based on this.
At Google there’s a selection of laptops you can choose from: a couple of Macs, a couple of Chromebooks, a couple of Linux laptops and a couple of Windows laptops. Usually there’s a smaller, lighter version for people who favor portability, and a larger version if you prefer a larger screen.
I’ve seen developers use all of them. I’d guess that Macs are most common (but under 50%) and Windows machines are least common.
I use a Chromebook (well, two Chromebooks). You turn it on, you log in and it looks exactly the same as your other Chromebook. This saves me carrying a laptop between work and home. If you work from another office, you don’t need to carry your laptop, you just grab one off the shelf, log in, and it looks the same as the computer you left at home.
(I tried using a Mac, I couldn’t get used to it, I didn’t know how to do anything, the keyboard shortcuts drove me crazy and so I gave it back and got a Chromebook).
Google and Meta (formerly Facebook) have a long-standing culture where employees believe that they’re hot stuff and that the company has to keep them happy because the company needs them as much as they need the company. Amazon doesn’t have that, probably because they fire people pretty often, making many of the remaining employees feel disposable.
Google and Meta have different concepts of culture fit—or at least they did historically. At Google, culture fit means “don’t be a person who’s hard to work with”. At Meta, culture fit means “be a person who believes that we are doing great things here and who will be excited to work hard on those great things”. As a result, it tends to be easy for Meta to keep convincing their existing employees that the company is doing the right thing. Google, on the other hand, ends up with a significant proportion of employees who are not easily convinced, and demand change.
Though it’s been so long since I’ve actually worked in the tech industry that I’m not sure whether Meta still fits the description I gave above, and there are signs that Google has been trending away from it.
The question was:
Why is employee activism seen more in Google but not in other companies like Facebook and Amazon?
Just to add a small note to Dimitriy’s great answer, computer science PhDs tend to be analytical and hyperrational. Working for Google is probably the single best “pass” to choosing whatever the hell you want for the rest of your career, or at least for the next step or two. I think some CS PhDs work for Google not because it’s what they want, but because they don’t know what they want, and if you don’t know what you want and you can get a job there, it would be hard to do better than Google. Why not make $250,000 a year while figuring out your next step? The other companies in this so-called “top-tier” have issues; they are potentially great employers, but their issues make them anywhere from slightly to dramatically less attractive.
The main reason top prop trading firms and hedge funds are more difficult to get into than tech companies is their size.
According to Wikipedia, Two Sigma has about 1600 employees[1] and Jane Street has about 1900 employees.[2] Even the largest hedge fund, Bridgewater, only has 1500,[3] and the third largest hedge fund, Renaissance Technologies, manages $130 billion with 310 employees.
Maybe these numbers on Wikipedia aren’t exact but I’d bet they’re well within the ballpark of being accurate.
Facebook has nearly 60,000 employees,[4] Amazon has 160,000,[5] Apple has 154,000,[6] Netflix has around 12,000,[7] and Google has 140,000.[8] Again, maybe these numbers aren’t precise but I don’t feel like doing more in-depth research.
However, it’s pretty obvious to see that the big tech companies employ multiples of what those finance firms do and quite simply there are far more opportunities at those tech companies. More seats mean it’s going to be less competitive to be hired.
Second, those top hedge funds and prop trading firms pay well. Like really well.
And Jane Street’s 2020 graduate hires straight from college were paid a $200k annual base salary, plus a $100k sign-on bonus, plus a $100k-$150k guaranteed performance bonus. Junior bankers’ high salaries look a little paltry by comparison.[9]
So a new college grad makes $400–450k. That’s a 22–23 year old making that. That same article found documents showing that the average compensation per employee in their London office was $1.3 million. Some make more and some make less, but that’s an eye-wateringly high number when you consider that all of the admin and support staff aren’t making close to that.
A friend’s younger brother worked at Jane Street about 10 years ago. He may still but I haven’t talked to her much since we moved. He was a rock star at Jane Street, and while I’m relying on my memory of a 10 year old conversation so I may not be totally accurate, he was in his late 20’s or early 30’s and made $4 million (and it may actually have been $8M) that year.
I know tech people are paid well, but I doubt many, if any, make $400–450k in year one, and making millions by your late 20s is unheard of unless you founded or joined a startup at the right time.
In addition, the interview processes at those firms are insanely difficult. I’ve never worked or interviewed at them, but I’ve heard war stories. Just getting your foot in the door is nearly impossible, and then getting an offer to work there is basically impossible.
My friend’s brother was halfway through an absolutely top PhD program in physics when he was recruited by them. I don’t consider myself a slouch and I’ve met a ton of highly intelligent people, but with this guy it was as if his brain was plugged into a computer and the internet. And he was a dynamic personality.
They hire the absolute best of the best and because they’re small and privately held they don’t actually ever need to hire or grow because the public markets can’t punish their stock price because they don’t have one. If some of those top investment firms can’t find the right fit they may simply not need to make a hire right then and can wait. They’re not big banks like Goldman that need to hire X number of analysts and associates because they need to replace the people who left.
So the main reasons that it’s tougher to get into a top hedge fund or prop trading firm than big tech is because they’re much smaller, they pay more, they are even more diligent in their hiring practices, and they hire very intelligent people.
If that were to happen, we’d have bigger problems to deal with. The Google monorepo exists on tens of thousands of machines. That would mean every data center and every workstation used by Google would suddenly be out of commission – not just turned off, but with storage not even available. This is only possible in a complete doomsday scenario.
It’s generally possible to find better compensated jobs for people with experience in big tech cos. This experience is very desirable for companies in fast growth mode – not just the technical expertise but also knowledge of processes of world-class engineering organizations. Smaller but fast-growing companies can offer better packages but with an element of risk – if the company ends up failing, the employee will only get their salary.
To Conclude:
The tech industry is booming, and there are a lot of great opportunities for those with the skills and experience to land a job at one of the FAANG companies. Google, Facebook, Amazon, Apple, Netflix, and Microsoft are all leaders in the tech industry, and they offer competitive salaries and benefits. The interview process for these companies can be intense, but if you’re prepared and knowledgeable about the company’s culture and values, you’ll have a good chance of landing the job. Perks at these companies can include free food and transportation, stock options, and generous vacation time. If you’re looking for a challenging and rewarding career in the tech industry, consider applying for a job at one of the FAANGM companies.
Originally Answered: What can I improve on for my next FAANG Sr SWE interview?
I’m going to read between the lines and assume that you are working at a grade below senior at a company which is not a FAANG. I’m also assuming that you feel that you are ready and that you’ve already done the obvious, read the books, practiced questions etc.
Your senior eng interview has 3 facets, coding, system design and behavioral.
Your levers to do better at each are:
To get better at coding interviews, interview more candidates. Seeing what others do well and less well is very helpful. This really applies to all sorts of interviews but IMO is most helpful for coding interviews.
To get better at system design interviews, read more design docs at your existing company, attend more design reviews, and force yourself to participate. Comment, ask questions. It doesn’t matter if you’re off the mark. See what doesn’t make sense to you and challenge it.
To get better at behavioral interviews, read your perf packets and the feedback from your coworkers. Read the docs that you wrote on your career plans (If you don’t have any, ask yourself why and start one). Reflect, regularly, on what has been hardest in your career, what you have done very well, where you struggled, what you would do differently.
I’d like to answer first in general — about attrition rates in the tech sector — and then about Amazon specifically.
Industry-Wide Retention
Retention in the US high-tech industry is very challenging. I believe there are two main reasons for that.
First, there is an acute shortage of qualified workers, which means companies are desperate to get employees anywhere they can, including — sometimes mainly — by poaching them from other companies. This is why so many companies moved into the Seattle East Side in the ’90s or South Lake Union in the last five years, for example: to poach from Microsoft and Amazon, respectively.
I remember the crazy late '90s in the Israeli high-tech industry. People would come in, work for 6–12 months, then jump ship for a fancier title and a bump in pay. It was insane; it was disgusting (I mean that literally: I would sometimes feel physically sick thinking about how stupid it all was).
The second reason — which I’m not as certain about — is that the high-tech industry is so incredibly dynamic. Things change constantly: new companies spring up and grow like crazy (Uber anyone?); “old” companies that were considered the cream of the crop a couple of years ago are suddenly untouchable (Yahoo!). New technologies explode onto the scene and old ones stagnate.
Not only does that create a lot of churn as companies keep growing and shrinking; it also creates incredible pressure on tech workers to stay on top of their game. We're always looking for the next big technology, the next big field, the next big product… The sad part is that a lot of it is just hype, but the psychological pressure is real enough, and it makes people move around always looking for the next great opportunity.
Amazon
The reason I want to talk about Amazon — which generally suffers from the same problems I’ve described above — is that there’s a perception in the public that Amazon is somehow worse than the rest of the industry; that it has awful attrition, because it’s a terrible place to work. I’ve tackled that in a couple of other answers (e.g. this one and this one), but it’s a very persistent myth.
Much of the fault is in reports like this one from PayScale, which then get regurgitated in hundreds of stories like this one (from BuzzFeed). The basic story seems very simple: the average tenure of an Amazon employee is about a year, which is — undoubtedly — really low, even in tech-industry terms.
That’s a great example of (supposedly) Benjamin Disraeli’s famous quote, “lies, damned lies and statistics”. There are at least two reasons why this number is completely meaningless:
Short tenure does not mean high attrition: in the last 6–7 years the number of employees at Amazon has grown exponentially, and I mean this literally.
This means that at any time, pretty much, about 20–40% of all Amazon employees have joined less than a year ago. It's not really surprising that they have a short tenure, is it?
Measuring retention is not trivial, but this methodology is just plain dumb (or maybe intentionally misleading).
Amazon is not (only) a tech company: sure, if you compare Amazon to Google and Facebook it comes out looking bad. But unlike those companies, the majority of Amazon employees are not tech workers. They're warehouse workers, drivers, customer-service people, etc. Many of them are temp workers, and many others do not consider the job a career.
There is a good discussion to be had about how Amazon treats these workers and whether it can do better, but it makes no sense to compare it with Microsoft or Apple; Walmart and Target would be much better comparisons.
Machine learning is the study of computer algorithms that improve automatically through experience. It is seen as a subset of artificial intelligence. Machine learning explores the study and construction of algorithms that can learn from and make predictions on data. In classical machine learning, you select a model to train and then manually perform feature extraction. It is used to devise complex models and algorithms that lend themselves to prediction, which in commercial use is known as predictive analytics.
Below are the most common Machine Learning use cases and capabilities:
Supervised learning is the machine learning task of inferring a function from labeled training data. The training data consist of a set of training examples.
Algorithms: Support Vector Machines, Regression, Naive Bayes, Decision Trees, K-nearest Neighbor Algorithm and Neural Networks
Example: If you built a fruit classifier, the labels will be “this is an orange, this is an apple and this is a banana”, based on showing the classifier examples of apples, oranges and bananas.
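As a rough, minimal sketch of such a fruit classifier (the feature values, labels, and choice of a decision tree below are illustrative assumptions, not from any real dataset):

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training rows: [weight in grams, skin smoothness from 0 to 1]
X_train = [[150, 0.9], [170, 0.85], [140, 0.4], [130, 0.45], [120, 0.7], [115, 0.75]]
y_train = ["apple", "apple", "orange", "orange", "banana", "banana"]  # the labels

clf = DecisionTreeClassifier().fit(X_train, y_train)
print(clf.predict([[160, 0.88]]))  # predicts a label, e.g. ['apple']
```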
Unsupervised learning is a type of machine learning algorithm used to draw inferences from datasets consisting of input data without labelled responses.
Algorithms: Clustering, Anomaly Detection, Neural Networks and Latent Variable Models
Example: In the same fruit example, a clustering algorithm might group the fruits into categories such as “fruits with soft skin and lots of dimples”, “fruits with shiny hard skin”, and “elongated yellow fruits”, without ever being told the fruit names.
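For contrast, a minimal unsupervised sketch of the same idea (the measurements are made up, and k-means is just one possible clustering algorithm):

```python
import numpy as np
from sklearn.cluster import KMeans

# The same kind of measurements, but with no labels attached
X = np.array([[150, 0.9], [170, 0.85], [140, 0.4], [130, 0.45], [120, 0.7], [115, 0.75]])
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
print(labels)  # cluster ids such as [0 0 1 1 2 2]; the algorithm never names the fruits
```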
Explain the difference between supervised and unsupervised machine learning?
In supervised machine learning algorithms, we have to provide labeled data, for example, classification of emails into spam and non-spam or prediction of stock market prices from labeled historical data, whereas in unsupervised learning we do not need labeled data, for example, clustering customers into segments based on purchasing behavior.
What is deep learning, and how does it contrast with other machine learning algorithms?
Deep learning is a subset of machine learning that is concerned with neural networks: how to use backpropagation and certain principles from neuroscience to more accurately model large sets of labelled, unlabelled, or semi-structured data. Deep networks learn layered representations of the data and can be applied in supervised, unsupervised, and semi-supervised settings.
What is Problem Formulation in Machine Learning?
The problem formulation phase of the ML Pipeline is critical, and it’s where everything begins. Typically, this phase is kicked off with a question of some kind. Examples of these kinds of questions include: Could cars really drive themselves? What additional product should we offer someone as they checkout? How much storage will clients need from a data center at a given time?
The problem formulation phase starts by seeing a problem and thinking “what question, if I could answer it, would provide the most value to my business?” If I knew the next product a customer was going to buy, is that most valuable? If I knew what was going to be popular over the holidays, is that most valuable? If I better understood who my customers are, is that most valuable?
However, some problems are not so obvious. When sales drop, new competitors emerge, or there’s a big change to a company/team/org, it can be easy to say, “I see the problem!” But sometimes the problem isn’t so clear. Consider self-driving cars. How many people think to themselves, “driving cars is a huge problem”? Probably not many. In fact, there isn’t a problem in the traditional sense of the word but there is an opportunity. Creating self-driving cars is a huge opportunity. That doesn’t mean there isn’t a problem or challenge connected to that opportunity. How do you design a self-driving system? What data would you look at to inform the decisions you make? Will people purchase self-driving cars?
Part of the problem formulation phase includes seeing where there are opportunities to use machine learning.
To formulate a problem in ML, consider the following questions:
Is machine learning appropriate for this problem, and why or why not?
What is the ML problem if there is one, and what would a success metric look like?
What kind of ML problem is this?
Is the data appropriate?
Machine Learning Problem Formulation Examples:
1) Amazon recently began advertising to its customers when they visit the company website. The Director in charge of the initiative wants the advertisements to be as tailored to the customer as possible. You will have access to all the data from the retail webpage, as well as all the customer data.
ML is appropriate because of the scale, variety and speed required. There are potentially thousands of ads and millions of customers that need to be served customized ads immediately as they arrive at the site.
The problem is that ads which are not useful to customers are both a wasted opportunity and a nuisance, yet not serving ads at all is also a wasted opportunity. So how does Amazon serve the most relevant advertisements to its retail customers?
Success would be the purchase of a product that was advertised.
This is a supervised learning problem because we have a labeled data point, our success metric, which is the purchase of a product.
The data is appropriate because it includes both the retail webpage data and the customer data.
What are the different Algorithm techniques in Machine Learning?
The different types of techniques in Machine Learning are:
● Supervised Learning
● Unsupervised Learning
● Semi-supervised Learning
● Reinforcement Learning
● Transduction
● Learning to Learn
What’s the difference between a generative and discriminative model?
A generative model will learn categories of data while a discriminative model will simply learn the distinction between different categories of data. Discriminative models will generally outperform generative models on classification tasks.
What Are the Applications of Supervised Machine Learning in Modern Businesses?
Applications of supervised machine learning include:
● Email Spam Detection: Here we train the model using historical data that consists of emails categorized as spam or not spam. This labeled information is fed as input to the model.
● Healthcare Diagnosis: By providing images regarding a disease, a model can be trained to detect if a person is suffering from the disease or not.
● Sentiment Analysis: This refers to the process of using algorithms to mine documents and determine whether they're positive, neutral, or negative in sentiment.
● Fraud Detection: By training the model to identify suspicious patterns, we can detect instances of possible fraud.
What Is Semi-supervised Machine Learning?
Supervised learning uses data that is completely labeled, whereas unsupervised learning uses no training data. In the case of semi-supervised learning, the training data contains a small amount of labeled data and a large amount of unlabeled data.
What Are Unsupervised Machine Learning Techniques?
There are two techniques used in unsupervised learning: clustering and association.
Clustering ● Clustering problems involve data to be divided into subsets. These subsets, also called clusters, contain data that are similar to each other. Different clusters reveal different details about the objects, unlike classification or regression.
Association ● In an association problem, we identify patterns of associations between different variables or items. ● For example, an eCommerce website can suggest other items for you to buy, based on the prior purchases that you have made, spending habits, items in your wish list, other customers’ purchase habits, and so on.
What evaluation approaches would you work to gauge the effectiveness of a machine learning model?
You would first split the dataset into training and test sets, or perhaps use cross-validation techniques to further segment the dataset into composite sets of training and test sets within the data. You should then implement a choice selection of performance metrics: here is a fairly comprehensive list. You could use measures such as the F1 score, the accuracy, and the confusion matrix. What’s important here is to demonstrate that you understand the nuances of how a model is measured and how to choose the right performance measures for the right situations.
What Are the Three Stages of Building a Model in Machine Learning?
The three stages of building a machine learning model are:
● Model Building: Choose a suitable algorithm for the model and train it according to the requirement.
● Model Testing: Check the accuracy of the model through the test data.
● Applying the Model: Make the required changes after testing and use the final model for real-time projects.
Here, it's important to remember that the model needs to be checked once in a while to make sure it's working correctly, and it should be modified to keep it up-to-date.
A data scientist wants to visualize the correlation between features in their dataset. What tool(s) can they use to visualize this in a correlation matrix?
Answer: Matplotlib, Seaborn
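A minimal sketch of how that might look (the file name "dataset.csv" is a placeholder):

```python
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

df = pd.read_csv("dataset.csv")            # placeholder dataset
corr = df.corr(numeric_only=True)          # pairwise correlations between numeric features
sns.heatmap(corr, annot=True, cmap="coolwarm", vmin=-1, vmax=1)
plt.title("Feature correlation matrix")
plt.show()
```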
You are preprocessing a dataset that includes categorical features. You want to determine which categories of particular features are most common in your dataset. Which basic descriptive statistic could you use? Answer: Mode
What are some examples of categorical features?
In machine learning and data science, categorical features are variables that can take on one of a limited number of values. For example, a categorical feature might represent the color of a car as Red, Yellow, or Blue. In general, categorical features are used to represent discrete characteristics (such as gender, race, or profession) that can be sorted into categories. When working with categorical features, it is often necessary to convert them into numerical form so that they can be used by machine learning algorithms. This process is known as encoding, and there are several different ways to encode categorical features. One common approach is to use a technique called one-hot encoding, which creates a new column for each possible category. For example, if there are three colors (Red, Yellow, and Blue), then each color would be represented by a separate column where all the values are either 0 or 1 (1 indicates that the row belongs to that category). Machine learning algorithms can then treat each column as a separate feature when training the model. Other approaches to encoding categorical data include label encoding and target encoding. These methods are often used in conjunction with one-hot encoding to improve the accuracy of machine learning models.
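As a small illustration of one-hot encoding and label encoding with pandas (the color column is just the made-up example from the paragraph above):

```python
import pandas as pd

df = pd.DataFrame({"color": ["Red", "Yellow", "Blue", "Red"]})

# One-hot encoding: one 0/1 column per category
one_hot = pd.get_dummies(df, columns=["color"])
print(one_hot)

# Label encoding: map each category to an integer code
df["color_code"] = df["color"].astype("category").cat.codes
print(df)
```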
How many variables are enough for multiple regressions?
Which of the following is most suitable for supervised learning?
Answer: Identifying birds in an image
You’ve plotted the correlation matrix of your dataset’s features and realized that two of the features present a high negative correlation (-0.95). What should you do?
Answer: Remove one of the features
You are in charge of preprocessing the data your publishing company wants to use for a new ML model they’re building, which aims to predict the influence an academic journal will have in its field. The preprocessing step is necessary to prepare the data for model training. What type of issue with the data might you encounter during this preprocessing phase?
Answer: Outliers, Missing values
A Machine Learning Engineer is creating and preparing data for a linear regression model. However, while preparing the data, the Engineer notices that about 20% of the numerical data contains missing values in the same two columns. The shape of the data is 500 rows by 4 columns, including the target column. How can the Engineer handle the missing values in the data?
(Select TWO.)
Answer: Fill the missing values with the mean of the column; impute the missing values using regression.
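A possible sketch of both options (the column names and values are invented; scikit-learn's IterativeImputer is one way to do regression-based imputation):

```python
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401  (required to expose IterativeImputer)
from sklearn.impute import IterativeImputer

df = pd.DataFrame({"a": [1.0, 2.0, np.nan, 4.0],
                   "b": [10.0, np.nan, 30.0, 40.0]})

# Option 1: fill each missing value with the mean of its column
filled = df.fillna(df.mean())

# Option 2: impute missing values with a regression-based (iterative) imputer
imputed = IterativeImputer(random_state=0).fit_transform(df)
```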
A Data Scientist created a correlation matrix between nine variables and the target variable. The correlation coefficient between two of the numerical variables, variable 1 and variable 5, is -0.95. How should the Data Scientist interpret the correlation coefficient?
Answer: As variable 1 increases, variable 5 decreases
An advertising and analytics company uses machine learning to predict user response to online advertisements using a custom XGBoost model. The company wants to improve its ML pipeline by porting its training and inference code, written in R, to Amazon SageMaker, with minimal changes to the existing code. How can the company accomplish this?
Answer: Use the Build Your Own Container (BYOC) Amazon SageMaker option. Create a new Docker container with the existing code. Register the container in Amazon Elastic Container Registry. Finally, run the training and inference jobs using this container.
An ML engineer at a text analytics startup wants to develop a text classification model. The engineer collected large amounts of data to develop a supervised text classification model. The engineer is getting 99% accuracy on the dataset but when the model is deployed to production, it performs significantly worse. What is the most likely cause of this?
Answer: The engineer did not split the data to validate the model on unseen data.
For a classification problem, what does the loss function measure? Answer: A loss function measures how accurate your prediction is with respect to the true values.
Gradient Descent is an important optimization method. What are 3 TRUE statements about the gradient descent method?
(Select THREE)
Answer: It tries to find the minimum of a loss function. It can involve multiple iterations. It uses a learning rate to multiply the effect of the gradients.
What is Deep Learning?
Deep Learning is nothing but a paradigm of machine learning which has shown incredible promise in recent years. This is because of the fact that Deep Learning shows a great analogy with the functioning of the neurons in the human brain.
What is the difference between machine learning and deep learning?
Machine learning is a field of computer science that gives computers the ability to learn without being explicitly programmed. Machine learning can be categorized in the following four categories. 1. Supervised machine learning, 2. Semi-supervised machine learning, 3. Unsupervised machine learning, 4. Reinforcement learning.
Deep Learning is a subfield of machine learning concerned with algorithms inspired by the structure and function of the brain called artificial neural networks.
• The main difference between deep learning and machine learning is due to the way data is presented in the system. Machine learning algorithms almost always require structured data, while deep learning networks rely on layers of ANN (artificial neural networks).
• Machine learning algorithms are designed to “learn” to act by understanding labeled data and then using it to produce new results with more datasets. However, when the result is incorrect, there is a need to “teach” them. Because machine learning algorithms require labeled, structured data, they are not suitable for solving complex queries that involve a huge amount of data.
• Deep learning networks do not require human intervention, as multilevel layers in neural networks place data in a hierarchy of different concepts, which ultimately learn from their own mistakes. However, even they can be wrong if the data quality is not good enough.
• Data decides everything. It is the quality of the data that ultimately determines the quality of the result.
• Both of these subsets of AI are somehow connected to data, which makes it possible to represent a certain form of “intelligence.” However, you should be aware that deep learning requires much more data than a traditional machine learning algorithm. The reason for this is that deep learning networks can identify different elements in neural network layers only when more than a million data points interact. Machine learning algorithms, on the other hand, are capable of learning by pre-programmed criteria.
Can you explain the differences between supervised, unsupervised, and reinforcement learning?
In supervised learning, we train a model to learn the relationship between input data and output data. We need to have labeled data to be able to do supervised learning. With unsupervised learning, we only have unlabeled data. The model learns a representation of the data. Unsupervised learning is frequently used to initialize the parameters of the model when we have a lot of unlabeled data and a small fraction of labeled data. We first train an unsupervised model and, after that, we use the weights of the model to train a supervised model. In reinforcement learning, the model has some input data and a reward depending on the output of the model. The model learns a policy that maximizes the reward. Reinforcement learning has been applied successfully to strategic games such as Go and even classic Atari video games.
What is the reason for the popularity of Deep Learning in recent times?
Now although Deep Learning has been around for many years, the major breakthroughs from these techniques came just in recent years. This is because of two main reasons:
• The increase in the amount of data generated through various sources
• The growth in hardware resources required to run these models. GPUs are multiple times faster, and they help us build bigger and deeper deep learning models in comparatively less time than we required previously.
Reinforcement Learning allows an agent to take actions to maximize cumulative reward. It learns by trial and error through a reward/penalty system: the environment rewards the agent, so over time the agent makes better decisions. Example: robot = agent, maze = environment. It is used for complex tasks (self-driving cars, game AI).
RL is a series of time steps in a Markov Decision Process:
1. Environment: the space in which the RL agent operates
2. State: data related to past actions the RL agent took
3. Action: the action taken
4. Reward: the numeric feedback the agent receives after the last action
5. Observation: data related to the environment; it can be fully visible or partially hidden
Explain Ensemble learning.
In ensemble learning, many base models like classifiers and regressors are generated and combined together so that they give better results. It is used when we build component classifiers that are accurate and independent. There are sequential as well as parallel ensemble methods.
Parametric models are those with a finite number of parameters. To predict new data, you only need to know the parameters of the model. Examples include linear regression, logistic regression, and linear SVMs. Non-parametric models are those with an unbounded number of parameters, allowing for more flexibility. To predict new data, you need to know the parameters of the model and the state of the data that has been observed. Examples include decision trees, k-nearest neighbors, and topic models using latent Dirichlet allocation.
What are support vector machines?
Support vector machines are supervised learning algorithms used for classification and regression analysis.
What is batch statistical learning?
Statistical learning techniques allow learning a function or predictor from a set of observed data that can make predictions about unseen or future data. These techniques provide guarantees on the performance of the learned predictor on the future unseen data based on a statistical assumption on the data generating process.
What Will Happen If the Learning Rate is Set inaccurately (Too Low or Too High)?
When your learning rate is too low, training of the model will progress very slowly, as we are making minimal updates to the weights. It will take many updates to reach the minimum point. If the learning rate is set too high, this causes undesirable divergent behavior in the loss function due to drastic updates in the weights. The model may fail to converge (it never settles at a point where it gives good output) or even diverge (the updates become too chaotic for the network to train).
What Is The Difference Between Epoch, Batch, and Iteration in Deep Learning?
• Epoch – Represents one iteration over the entire dataset (everything put into the training model).
• Batch – Refers to when we cannot pass the entire dataset into the neural network at once, so we divide the dataset into several batches.
• Iteration – If we have 10,000 images as data and a batch size of 200, then an epoch runs 50 iterations (10,000 divided by 200).
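The arithmetic behind that example, as a tiny sketch:

```python
dataset_size = 10_000   # images in the dataset
batch_size = 200        # images per batch

iterations_per_epoch = dataset_size // batch_size
print(iterations_per_epoch)  # 50 iterations make up one epoch
```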
Why Is Tensorflow the Most Preferred Library in Deep Learning?
Tensorflow provides both C++ and Python APIs, making it easier to work on and has a faster compilation time compared to other Deep Learning libraries like Keras and Torch. Tensorflow supports both CPU and GPU computing devices.
What Do You Mean by Tensor in Tensorflow?
A tensor is a mathematical object represented as arrays of higher dimensions. These arrays of data with different dimensions and ranks fed as input to the neural network are called “Tensors.”
Explain a Computational Graph.
Everything in TensorFlow is based on creating a computational graph. It has a network of nodes, where each node performs an operation. Nodes represent mathematical operations, and edges represent tensors. Since data flows in the form of a graph, it is also called a “DataFlow Graph.”
Cognition: Reasoning on top of data (Regression, Classification, Pattern Recognition)
What is the difference between classification and regression?
Classification is used to produce discrete results; it is used to sort data into specific categories, for example, classifying emails into spam and non-spam categories. Whereas we use regression analysis when we are dealing with continuous data, for example, predicting stock prices at a certain point in time.
Explain the Bias-Variance Tradeoff.
Predictive models have a tradeoff between bias (how well the model fits the data) and variance (how much the model changes based on changes in the inputs). Simpler models are stable (low variance) but they don’t get close to the truth (high bias). More complex models are more prone to overfitting (high variance) but they are expressive enough to get close to the truth (low bias). The best model for a given problem usually lies somewhere in the middle.
What is the difference between stochastic gradient descent (SGD) and gradient descent (GD)?
Both algorithms are methods for finding a set of parameters that minimize a loss function by evaluating parameters against data and then making adjustments. In standard gradient descent, you’ll evaluate all training samples for each set of parameters. This is akin to taking big, slow steps toward the solution. In stochastic gradient descent, you’ll evaluate only 1 training sample for the set of parameters before updating them. This is akin to taking small, quick steps toward the solution.
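A minimal NumPy sketch of the difference, assuming a toy one-parameter linear regression (everything here is illustrative, not a production optimizer):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=1000)
y = 3.0 * x + rng.normal(scale=0.1, size=1000)   # true weight is 3.0

def grad(w, xb, yb):
    # gradient of mean squared error with respect to w on the batch (xb, yb)
    return -2.0 * np.mean(xb * (yb - w * xb))

w_gd, w_sgd, lr = 0.0, 0.0, 0.1
for step in range(200):
    w_gd -= lr * grad(w_gd, x, y)                    # GD: evaluate all samples each step
    i = rng.integers(len(x))
    w_sgd -= lr * grad(w_sgd, x[i:i+1], y[i:i+1])    # SGD: evaluate one sample each step

print(round(w_gd, 2), round(w_sgd, 2))  # both estimates approach 3.0
```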
How Can You Choose a Classifier Based on a Training Set Data Size?
When the training set is small, a model that has high bias and low variance works better because it is less likely to overfit; for example, Naive Bayes works well when the training set is small. When the training set is large, models with low bias and high variance tend to perform better, as they can capture complex relationships.
Explain Latent Dirichlet Allocation (LDA)
Latent Dirichlet Allocation (LDA) is a common method of topic modeling, or classifying documents by subject matter. LDA is a generative model that represents documents as a mixture of topics that each have their own probability distribution of possible words. The “Dirichlet” distribution is simply a distribution of distributions. In LDA, documents are distributions of topics that are distributions of words.
Explain Principle Component Analysis (PCA)
PCA is a method for transforming features in a dataset by combining them into uncorrelated linear combinations. These new features, or principal components, sequentially maximize the variance represented (i.e. the first principal component has the most variance, the second principal component has the second most, and so on). As a result, PCA is useful for dimensionality reduction because you can set an arbitrary variance cutoff.
PCA is a dimensionality reduction technique that enables you to identify the correlations and patterns in the dataset so that it can be transformed into a dataset of significantly lower dimensions without any loss of important information.
• It is an unsupervised statistical technique used to examine the interrelations among a set of variables. It is also known as a general factor analysis where regression determines a line of best fit.
• It works on a condition that while the data in a higher-dimensional space is mapped to data in a lower dimension space, the variance or spread of the data in the lower dimensional space should be maximum.
PCA is carried out in the following steps
1. Standardization of the data
2. Computing the covariance matrix
3. Calculation of the eigenvectors and eigenvalues
4. Computing the principal components
5. Reducing the dimensions of the data
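Those steps can be sketched with scikit-learn roughly as follows (the random data and the choice of 2 components are placeholders):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.rand(100, 5)                  # placeholder: 100 samples, 5 features

X_std = StandardScaler().fit_transform(X)   # step 1: standardization
pca = PCA(n_components=2)                   # steps 2-4: covariance, eigenvectors, components
X_reduced = pca.fit_transform(X_std)        # step 5: project onto fewer dimensions

print(pca.explained_variance_ratio_)        # variance captured by each principal component
```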
The F1 score is a measure of a model’s performance. It is a weighted average of the precision and recall of a model, with results tending to 1 being the best, and those tending to 0 being the worst. You would use it in classification tests where true negatives don’t matter much.
When should you use classification over regression?
Classification produces discrete values and maps the dataset into strict categories, while regression gives you continuous results that allow you to better distinguish differences between individual points. You would use classification over regression if you wanted your results to reflect the belongingness of data points in your dataset to certain explicit categories (for example, if you wanted to know whether a name was male or female rather than just how correlated it was with male and female names).
How do you ensure you’re not overfitting with a model?
This is a simple restatement of a fundamental problem in machine learning: the possibility of overfitting training data and carrying the noise of that data through to the test set, thereby providing inaccurate generalizations. There are three main methods to avoid overfitting:
1. Keep the model simpler: reduce variance by taking into account fewer variables and parameters, thereby removing some of the noise in the training data.
2. Use cross-validation techniques such as k-fold cross-validation.
3. Use regularization techniques such as LASSO that penalize certain model parameters if they're likely to cause overfitting.
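For instance, a small cross-validation sketch (the iris dataset and logistic regression are used purely as stand-ins):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
print(scores.mean())                         # average out-of-fold accuracy
```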
How Will You Know Which Machine Learning Algorithm to Choose for Your Classification Problem?
While there is no fixed rule to choose an algorithm for a classification problem, you can follow these guidelines:
● If accuracy is a concern, test different algorithms and cross-validate them
● If the training dataset is small, use models that have low variance and high bias
● If the training dataset is large, use models that have high variance and little bias
Why is Area Under ROC Curve (AUROC) better than raw accuracy as an out-of-sample evaluation metric?
AUROC is robust to class imbalance, unlike raw accuracy. For example, if you want to detect a type of cancer that's prevalent in only 1% of the population, you can build a model that achieves 99% accuracy simply by classifying everyone as cancer-free.
What are the advantages and disadvantages of neural networks?
Advantages: Neural networks (specifically deep NNs) have led to performance breakthroughs for unstructured datasets such as images, audio, and video. Their incredible flexibility allows them to learn patterns that no other ML algorithm can learn. Disadvantages: However, they require a large amount of training data to converge. It’s also difficult to pick the right architecture, and the internal “hidden” layers are incomprehensible.
Define Precision and Recall.
Precision
● Precision is the ratio of the events you correctly recalled to the total number of events you recalled (a mix of correct and wrong recalls).
● Precision = (True Positive) / (True Positive + False Positive)
Recall
● Recall is the ratio of the events you correctly recalled to the total number of actual events.
● Recall = (True Positive) / (True Positive + False Negative)
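A tiny worked example with scikit-learn (the label vectors are made up so that TP = 3, FP = 1, FN = 1):

```python
from sklearn.metrics import precision_score, recall_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print(precision_score(y_true, y_pred))  # TP / (TP + FP) = 3 / 4
print(recall_score(y_true, y_pred))     # TP / (TP + FN) = 3 / 4
```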
What Is Decision Tree Classification?
A decision tree builds classification (or regression) models as a tree structure, with datasets broken up into ever-smaller subsets while developing the decision tree, literally in a tree-like way with branches and nodes. Decision trees can handle both categorical and numerical data.
What Is Pruning in Decision Trees, and How Is It Done?
Pruning is a technique in machine learning that reduces the size of decision trees. It reduces the complexity of the final classifier, and hence improves predictive accuracy by the reduction of overfitting. Pruning can occur in:
● Top-down fashion: it will traverse nodes and trim subtrees starting at the root
● Bottom-up fashion: it will begin at the leaf nodes
There is a popular pruning algorithm called reduced error pruning, in which:
● Starting at the leaves, each node is replaced with its most popular class
● If the prediction accuracy is not affected, the change is kept
● There is an advantage of simplicity and speed
What Is a Recommendation System?
Anyone who has used Spotify or shopped at Amazon will recognize a recommendation system: It’s an information filtering system that predicts what a user might want to hear or see based on choice patterns provided by the user.
What Is Kernel SVM?
Kernel SVM is the abbreviated version of the kernel support vector machine. Kernel methods are a class of algorithms for pattern analysis, and the most common one is the kernel SVM.
What Are Some Methods of Reducing Dimensionality?
You can reduce dimensionality by combining features with feature engineering, removing collinear features, or using algorithmic dimensionality reduction. Now that you have gone through these machine learning interview questions, you must have got an idea of your strengths and weaknesses in this domain.
How is KNN different from k-means clustering?
K-Nearest Neighbors is a supervised classification algorithm, while k-means clustering is an unsupervised clustering algorithm. While the mechanisms may seem similar at first, what this really means is that in order for K-Nearest Neighbors to work, you need labeled data you want to classify an unlabeled point into (thus the nearest neighbor part). K-means clustering requires only a set of unlabeled points and a threshold: the algorithm will take unlabeled points and gradually learn how to cluster them into groups by computing the mean of the distance between different points.
What are difference between Data Mining and Machine learning?
Machine learning relates to the study, design, and development of algorithms that give computers the capability to learn without being explicitly programmed. Data mining, on the other hand, can be defined as the process of extracting knowledge or unknown interesting patterns from unstructured data; during this process, machine learning algorithms are often used.
Naive Bayes methods are a set of supervised learning algorithms based on applying Bayes' theorem with the “naive” assumption of conditional independence between every pair of features given the value of the class variable. For a class variable y and a dependent feature vector x1 through xn, Bayes' theorem states the relationship P(y | x1, …, xn) = P(y) P(x1, …, xn | y) / P(x1, …, xn).
What is PCA (Principal Component Analysis)? When do you use it?
Principal component analysis (PCA) is a statistical method used in machine learning. It consists of projecting data from a higher-dimensional space into a lower-dimensional space while maximizing the variance captured by each retained dimension.
The process works as follows. We define a matrix A whose rows are the single observations of the dataset (in tabular format, each single row) and whose columns are our features. For this matrix we construct a variable space with as many dimensions as there are features; each feature represents one coordinate axis. For each feature, the length is standardized according to a scaling criterion, normally by scaling to unit variance. It is essential to scale the features to a common scale, otherwise the features with a greater magnitude will weigh more in determining the principal components. Once all the observations are plotted and the mean of each variable is computed, that mean is represented by a point in the center of our plot (the center of gravity). Then, we subtract the mean from each observation, shifting the coordinate system so that its center is at the origin. The resulting best-fitting line is the line that best accounts for the shape of the point swarm; it represents the direction of maximum variance in the data. Each observation may be projected onto this line in order to get a coordinate value along the PC line. This value is known as a score. The next best-fitting line can be similarly chosen from directions perpendicular to the first. Repeating this process yields an orthogonal basis in which different individual dimensions of the data are uncorrelated. These basis vectors are called principal components.
PCA is mostly used as a tool in exploratory data analysis and for making predictive models. It is often used to visualize genetic distance and relatedness between populations.
What are the pre-processing steps required for performing principal component analysis on a dataset?
PCA is a technique that is used for reducing the dimensionality of a dataset while still preserving as much of the variance as possible. It is commonly used in machine learning and data science, as it can help to improve the performance of models by making the data easier to work with. In order to perform PCA on a dataset, there are a few pre-processing steps that need to be undertaken.
First, any features that are strongly correlated with each other should be removed, as PCA will not be effective in reducing the dimensionality of the data if there are strong correlations present.
Next, any features that contain missing values should be imputed, as PCA cannot be performed on data that contains missing values.
Finally, the data should be scaled so that all features are on the same scale; this is necessary because PCA is based on the variance of the data, and if the scales of the features are different then PCA will not be able to accurately identify which features are most important in terms of variance.
Once these pre-processing steps have been completed, PCA can be performed on the dataset.
Principal component analysis (PCA) is a statistical technique that is used to reduce the dimensionality of a dataset. PCA is often used as a pre-processing step in machine learning and data science, as it can help to improve the performance of models. In order to perform PCA on a dataset, the data must first be scaled and centered. Scaling ensures that all of the features are on the same scale, which is important for PCA. Centering means that the mean of each feature is zero. This is also important for PCA, as PCA is sensitive to changes in the mean of the data. Once the data has been scaled and centered, PCA can be performed by computing the eigenvectors and eigenvalues of the covariance matrix. These eigenvectors and eigenvalues can then be used to transform the data into a lower-dimensional space.
Classifying data is a common task in machine learning. Suppose some given data points each belong to one of two classes, and the goal is to decide which class a new data point will be in. In the case of support vector machines, a data point is viewed as a p-dimensional vector (a list of p numbers), and we want to know whether we can separate such points with a (p − 1)-dimensional hyperplane. This is called a linear classifier. There are many hyperplanes that might classify the data. One reasonable choice for the best hyperplane is the one that represents the largest separation, or margin, between the two classes. So, we choose the hyperplane so that the distance from it to the nearest data point on each side is maximized. If such a hyperplane exists, it is known as the maximum-margin hyperplane, and the linear classifier it defines is known as a maximum-margin classifier, or equivalently, the perceptron of optimal stability.
SVMs are helpful in text and hypertext categorization, as their application can significantly reduce the need for labeled training instances in both the standard inductive and transductive settings.
Some methods for shallow semantic parsing are based on support vector machines.
Classification of images can also be performed using SVMs. Experimental results show that SVMs achieve significantly higher search accuracy than traditional query refinement schemes after just three to four rounds of relevance feedback.
Classification of satellite data like SAR data using supervised SVM.
Hand-written characters can be recognized using SVM.
What are the support vectors in SVM?
The support vectors are the data points that lie closest to the separating hyperplane (the darkened points in the usual illustration). The sketched lines through them mark the distance from the classifier (the hyperplane) to these closest points, and the distance between the two thin lines is called the margin.
To extend SVM to cases in which the data are not linearly separable, we introduce the hinge loss function, max(0, 1 − yi(w · xi − b)). This function is zero if xi lies on the correct side of the margin. For data on the wrong side of the margin, the function's value is proportional to the distance from the margin.
What are the different kernels in SVM?
There are four types of kernels in SVM:
1. Linear kernel
2. Polynomial kernel
3. Radial basis function (RBF) kernel
4. Sigmoid kernel
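A quick sketch of trying each kernel with scikit-learn (the two-moons toy dataset is just an illustrative choice):

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(noise=0.2, random_state=0)

for kernel in ["linear", "poly", "rbf", "sigmoid"]:   # the four kernel types listed above
    clf = SVC(kernel=kernel).fit(X, y)
    print(kernel, clf.score(X, y))                    # training accuracy for each kernel
```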
The most popular tree-based ensemble methods are AdaBoost, Random Forest, and eXtreme Gradient Boosting (XGBoost).
AdaBoost is best used in a dataset with low noise, when computational complexity or timeliness of results is not a main concern and when there are not enough resources for broader hyperparameter tuning due to lack of time and knowledge of the user.
Random forests should not be used when dealing with time series data or any other data where look-ahead bias should be avoided, and the order and continuity of the samples need to be ensured. This algorithm can handle noise relatively well, but more knowledge from the user is required to adequately tune the algorithm compared to AdaBoost.
The main advantages of XGBoost are its lightning speed compared to other algorithms, such as AdaBoost, and its regularization parameter that successfully reduces variance. Even aside from the regularization parameter, this algorithm leverages a learning rate (shrinkage) and subsamples from the features like random forests, which increases its ability to generalize even further. However, XGBoost is more difficult to understand, visualize and tune compared to AdaBoost and random forests. There is a multitude of hyperparameters that can be tuned to increase performance.
What are Artificial Neural Networks?
Artificial Neural networks are a specific set of algorithms that have revolutionized machine learning. They are inspired by biological neural networks. Neural Networks can adapt to changing the input, so the network generates the best possible result without needing to redesign the output criteria.
Artificial Neural Networks work on the same principle as a biological neural network. They consist of inputs that get processed with weighted sums and a bias, with the help of activation functions.
How Are Weights Initialized in a Network?
There are two methods here: we can either initialize the weights to zero or assign them randomly.
Initializing all weights to 0: This makes your model similar to a linear model. All the neurons and every layer perform the same operation, giving the same output and making the deep net useless.
Initializing all weights randomly: Here, the weights are assigned randomly by initializing them very close to 0. It gives better accuracy to the model since every neuron performs different computations. This is the most commonly used method.
What Is the Cost Function?
Also referred to as “loss” or “error,” the cost function is a measure used to evaluate how good your model's performance is. It's used to compute the error of the output layer during backpropagation. We push that error backwards through the neural network and use it during the different training functions. The best-known one is the mean squared error.
With neural networks, you’re usually working with hyperparameters once the data is formatted correctly. A hyperparameter is a parameter whose value is set before the learning process begins. It determines how a network is trained and the structure of the network (such as the number of hidden units, the learning rate, epochs, batches, etc.).
Convolutional neural networks are regularized versions of multilayer perceptrons (MLPs). They were developed based on the working of the neurons of the animal visual cortex.
The objective of using the CNN:
The idea is that you give the computer this array of numbers and it will output numbers that describe the probability of the image being a certain class (.80 for a cat, .15 for a dog, .05 for a bird, etc.). It works similar to how our brain works. When we look at a picture of a dog, we can classify it as such if the picture has identifiable features such as paws or 4 legs. In a similar way, the computer is able to perform image classification by looking for low-level features such as edges and curves and then building up to more abstract concepts through a series of convolutional layers. The computer uses low-level features obtained at the initial levels to generate high-level features such as paws or eyes to identify the object.
There are four layers in a CNN:
1. Convolutional Layer – the layer that performs a convolutional operation, creating several smaller picture windows to go over the data.
2. Activation Layer (ReLU Layer) – it brings non-linearity to the network and converts all the negative pixels to zero. The output is a rectified feature map. It follows each convolutional layer.
3. Pooling Layer – pooling is a down-sampling operation that reduces the dimensionality of the feature map. Stride = how much you slide, and you get the max of the n x n matrix.
4. Fully Connected Layer – this layer recognizes and classifies the objects in the image.
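One way to sketch those four layer types with tf.keras (the input shape and the three output classes are illustrative assumptions):

```python
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), input_shape=(64, 64, 3)),  # 1. convolutional layer
    tf.keras.layers.ReLU(),                                       # 2. activation (ReLU) layer
    tf.keras.layers.MaxPooling2D((2, 2)),                         # 3. pooling layer
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(3, activation="softmax"),               # 4. fully connected layer
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```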
Pooling is used to reduce the spatial dimensions of a CNN. It performs down-sampling operations to reduce the dimensionality and creates a pooled feature map by sliding a filter matrix over the input matrix.
RNNs are a type of artificial neural network designed to recognize patterns in sequences of data, such as time series from stock markets, sensors, or government agencies.
Recurrent Neural Networks (RNNs) add an interesting twist to basic neural networks. A vanilla neural network takes in a fixed size vector as input which limits its usage in situations that involve a ‘series’ type input with no predetermined size.
RNNs are designed to take a series of inputs with no predetermined limit on size. One could ask, what's the big deal, I can call a regular NN repeatedly too?
Sure can, but the ‘series’ part of the input means something. A single input item from the series is related to others and likely has an influence on its neighbors. Otherwise it’s just “many” inputs, not a “series” input (duh!). Recurrent Neural Network remembers the past and its decisions are influenced by what it has learnt from the past. Note: Basic feed forward networks “remember” things too, but they remember things they learnt during training. For example, an image classifier learns what a “1” looks like during training and then uses that knowledge to classify things in production. While RNNs learn similarly while training, in addition, they remember things learnt from prior input(s) while generating output(s). RNNs can take one or more input vectors and produce one or more output vectors and the output(s) are influenced not just by weights applied on inputs like a regular NN, but also by a “hidden” state vector representing the context based on prior input(s)/output(s). So, the same input could produce a different output depending on previous inputs in the series.
In summary, in a vanilla neural network, a fixed size input vector is transformed into a fixed size output vector. Such a network becomes “recurrent” when you repeatedly apply the transformations to a series of given input and produce a series of output vectors. There is no pre-set limitation to the size of the vector. And, in addition to generating the output which is a function of the input and hidden state, we update the hidden state itself based on the input and use it in processing the next input.
What is the role of the Activation Function?
The activation function is used to introduce non-linearity into the neural network, helping it learn more complex functions. Without it, the neural network would only be able to learn linear functions, which are linear combinations of its input data. An activation function is a function in an artificial neuron that delivers an output based on its inputs.
Auto-encoders are simple learning networks that aim to transform inputs into outputs with the minimum possible error. This means that we want the output to be as close to input as possible. We add a couple of layers between the input and the output, and the sizes of these layers are smaller than the input layer. The auto-encoder receives unlabeled input which is then encoded to reconstruct the input.
An autoencoder is a type of artificial neural network used to learn efficient data coding in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”. Along with the reduction side, a reconstructing side is learnt, where the autoencoder tries to generate from the reduced encoding a representation as close as possible to its original input, hence its name. Several variants exist to the basic model, with the aim of forcing the learned representations of the input to assume useful properties. Autoencoders are effectively used for solving many applied problems, from face recognition to acquiring the semantic meaning of words.
What is a Boltzmann Machine?
Boltzmann machines have a simple learning algorithm that allows them to discover interesting features that represent complex regularities in the training data. The Boltzmann machine is basically used to optimize the weights and the quantity for the given problem. The learning algorithm is very slow in networks with many layers of feature detectors. “Restricted Boltzmann Machines” algorithm has a single layer of feature detectors which makes it faster than the rest.
What Is Dropout and Batch Normalization?
Dropout is a technique of randomly dropping out hidden and visible nodes of a network to prevent overfitting (typically dropping 20 percent of the nodes). It doubles the number of iterations needed for the network to converge. It is used to avoid overfitting, as it increases the capacity of generalization.
Batch normalization is a technique to improve the performance and stability of neural networks by normalizing the inputs to every layer so that they have a mean output activation of zero and a standard deviation of one.
Why Is TensorFlow the Most Preferred Library in Deep Learning?
TensorFlow provides both C++ and Python APIs, making it easier to work on and has a faster compilation time compared to other Deep Learning libraries like Keras and PyTorch. TensorFlow supports both CPU and GPU computing devices.
What is Tensor in TensorFlow?
A tensor is a mathematical object represented as arrays of higher dimensions. Think of a n-D matrix. These arrays of data with different dimensions and ranks fed as input to the neural network are called “Tensors.”
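For example, a few tensors of different ranks (a minimal sketch):

```python
import tensorflow as tf

scalar = tf.constant(3.0)                       # rank 0 tensor
vector = tf.constant([1.0, 2.0, 3.0])           # rank 1 tensor
matrix = tf.constant([[1.0, 2.0], [3.0, 4.0]])  # rank 2 tensor
print(matrix.shape, matrix.dtype)               # (2, 2) float32
```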
What is the Computational Graph?
Everything in TensorFlow is based on creating a computational graph. It has a network of nodes where each node performs an operation. Nodes represent mathematical operations, and edges represent tensors. Since data flows in the form of a graph, it is also called a “DataFlow Graph.”
How is logistic regression done?
Logistic regression measures the relationship between the dependent variable (our label of what we want to predict) and one or more independent variables (our features) by estimating probability using its underlying logistic function (sigmoid).
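A small sketch of the underlying sigmoid (the weights, bias, and input below are arbitrary numbers for illustration):

```python
import numpy as np

def sigmoid(z):
    # squashes any real-valued score into a probability between 0 and 1
    return 1.0 / (1.0 + np.exp(-z))

w, b = np.array([0.8, -1.2]), 0.3   # hypothetical learned weights and bias
x = np.array([2.0, 1.0])            # hypothetical feature vector
print(sigmoid(w @ x + b))           # estimated probability of the positive class
```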
Explain the steps in making a decision tree.
1. Take the entire data set as input
2. Calculate the entropy of the target variable, as well as the predictor attributes
3. Calculate the information gain of all attributes (we gain information on sorting different objects from each other)
4. Choose the attribute with the highest information gain as the root node
5. Repeat the same procedure on every branch until the decision node of each branch is finalized
For example, let's say you want to build a decision tree to decide whether you should accept or decline a job offer.
It is clear from the resulting decision tree that an offer is accepted if:
• Salary is greater than $50,000
• The commute is less than an hour
• Coffee is offered
A random forest is built up of a number of decision trees. If you split the data into different packages and make a decision tree in each of the different groups of data, the random forest brings all those trees together.
Steps to build a random forest model:
1. Randomly select k features from a total of m features, where k << m
2. Among the k features, calculate the node d using the best split point
3. Split the node into daughter nodes using the best split
4. Repeat steps two and three until leaf nodes are finalized
5. Build the forest by repeating steps one to four n times to create n trees
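With scikit-learn, those steps are handled internally; a minimal usage sketch (the iris dataset and hyperparameters are placeholders):

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, max_features="sqrt", random_state=0)
forest.fit(X, y)            # each tree sees a bootstrap sample and sqrt(m) features per split
print(forest.score(X, y))
```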
Differentiate between univariate, bivariate, and multivariate analysis.
Univariate data contains only one variable. The purpose of the univariate analysis is to describe the data and find patterns that exist within it.
The patterns can be studied by drawing conclusions using mean, median, mode, dispersion or range, minimum, maximum, etc.
Bivariate data involves two different variables. The analysis of this type of data deals with causes and relationships and the analysis is done to determine the relationship between the two variables.
Here, the relationship is visible from the table that temperature and sales are directly proportional to each other. The hotter the temperature, the better the sales.
Data involving three or more variables is categorized as multivariate. It is similar to bivariate analysis but contains more than one dependent variable.
Example: data for house price prediction The patterns can be studied by drawing conclusions using mean, median, and mode, dispersion or range, minimum, maximum, etc. You can start describing the data and using it to guess what the price of the house will be.
What are the feature selection methods used to select the right variables?
There are two main methods for feature selection: filter methods and wrapper methods.
Filter Methods involve:
• Linear discriminant analysis
• ANOVA
• Chi-Square
The best analogy for selecting features is "bad data in, bad answer out." When we're limiting or selecting the features, it's all about cleaning up the data coming in.
Wrapper Methods involve:
• Forward Selection: We test one feature at a time and keep adding them until we get a good fit
• Backward Selection: We test all the features and start removing them to see what works better
• Recursive Feature Elimination: Recursively looks through all the different features and how they pair together
Wrapper methods are very labor-intensive, and high-end computers are needed if a lot of data analysis is performed with the wrapper method.
You are given a data set consisting of variables with more than 30 percent missing values. How will you deal with them?
If the data set is large, we can just simply remove the rows with missing data values. It is the quickest way; we use the rest of the data to predict the values.
For smaller data sets, we can impute missing values with the mean or median of the rest of the data using a pandas DataFrame in Python. There are different ways to do so, such as df.fillna(df.mean()) or df.fillna(df.median()).
Another option for imputation is using KNN for numeric or classification values (KNN uses the k closest values to impute the missing value).
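A small sketch of KNN-based imputation (the array values are invented):

```python
import numpy as np
from sklearn.impute import KNNImputer

X = np.array([[1.0, 2.0], [3.0, np.nan], [5.0, 6.0], [7.0, 8.0]])
X_filled = KNNImputer(n_neighbors=2).fit_transform(X)  # fills the NaN from the 2 nearest rows
print(X_filled)
```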
Q76: How will you calculate the Euclidean distance in Python?
plot1 = [1,3]
plot2 = [2,5]
The Euclidean distance can be calculated as follows:
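The snippet above appears to be cut off; a minimal completion using the two points given would be:

```python
import math

plot1 = [1, 3]
plot2 = [2, 5]

euclidean_distance = math.sqrt((plot1[0] - plot2[0]) ** 2 + (plot1[1] - plot2[1]) ** 2)
print(euclidean_distance)  # sqrt(1 + 4) = sqrt(5) ≈ 2.236
```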
What are dimensionality reduction and its benefits?
Dimensionality reduction refers to the process of converting a data set with vast dimensions into data with fewer dimensions (fields) to convey similar information concisely.
This reduction helps in compressing data and reducing storage space. It also reduces computation time as fewer dimensions lead to less computing. It removes redundant features; for example, there’s no point in storing a value in two different units (meters and inches).
How should you maintain a deployed model?
The steps to maintain a deployed model are (CREM):
1. Monitor: constant monitoring of all models is needed to determine their performance accuracy. When you change something, you want to figure out how your changes are going to affect things. This needs to be monitored to ensure it's doing what it's supposed to do.
2. Evaluate: evaluation metrics of the current model are calculated to determine if a new algorithm is needed.
3. Compare: the new models are compared to each other to determine which model performs the best.
4. Rebuild: the best performing model is re-built on the current state of data.
How can time-series data be declared stationary?
The mean of the series should not be a function of time.
The variance of the series should not be a function of time. This property is known as homoscedasticity.
The covariance of the i-th term and the (i+m)-th term should not be a function of time.
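These properties can be checked empirically; one common stationarity check (not the only one) is the Augmented Dickey-Fuller test, sketched here assuming statsmodels and NumPy are available:
import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
stationary = rng.normal(size=500)                 # white noise: stationary
random_walk = np.cumsum(rng.normal(size=500))     # random walk: non-stationary

for name, series in [("white noise", stationary), ("random walk", random_walk)]:
    stat, pvalue = adfuller(series)[:2]
    print(f"{name}: ADF statistic={stat:.2f}, p-value={pvalue:.3f}")
# A small p-value (< 0.05) rejects the unit-root null, i.e. suggests stationarity.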
‘People who bought this also bought…’ recommendations seen on Amazon are a result of which algorithm?
The recommendation engine is accomplished with collaborative filtering. Collaborative filtering explains the behavior of other users and their purchase history in terms of ratings, selection, etc. The engine makes predictions on what might interest a person based on the preferences of other users. In this algorithm, item features are unknown. For example, a sales page shows that a certain number of people buy a new phone and also buy tempered glass at the same time. Next time, when a person buys a phone, he or she may see a recommendation to buy tempered glass as well.
What is a Generative Adversarial Network?
Suppose there is a wine shop purchasing wine from dealers, which they resell later. But some dealers sell fake wine. In this case, the shop owner should be able to distinguish between fake and authentic wine. The forger will try different techniques to sell fake wine and make sure specific techniques go past the shop owner’s check. The shop owner would probably get some feedback from wine experts that some of the wine is not original. The owner would have to improve how he determines whether a wine is fake or authentic. The forger’s goal is to create wines that are indistinguishable from the authentic ones while the shop owner intends to tell if the wine is real or not accurately.
• There is a noise vector coming into the forger who is generating fake wine.
• Here the forger acts as a Generator.
• The shop owner acts as a Discriminator.
• The Discriminator gets two inputs; one is the fake wine, while the other is the real authentic wine. The shop owner has to figure out whether it is real or fake.
So, there are two primary components of a Generative Adversarial Network (GAN):
1. Generator
2. Discriminator
The generator is a CNN that keeps producing images that are progressively closer in appearance to the real images, while the discriminator tries to determine the difference between real and fake images. The ultimate aim is to make the discriminator learn to identify real and fake images.
You are given a dataset on cancer detection. You have built a classification model and achieved an accuracy of 96 percent. Why shouldn’t you be happy with your model performance? What can you do about it?
Cancer detection results in imbalanced data. In an imbalanced dataset, accuracy should not be used as a measure of performance. It is important to focus on the remaining four percent, which represents the patients who were wrongly diagnosed. Early diagnosis is crucial when it comes to cancer detection and can greatly improve a patient’s prognosis.
Hence, to evaluate model performance, we should use Sensitivity (True Positive Rate), Specificity (True Negative Rate), and the F measure to determine the class-wise performance of the classifier.
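A minimal sketch (assuming scikit-learn) of why accuracy is misleading here: a model that predicts “healthy” for everyone reaches 96 percent accuracy yet has zero sensitivity:
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, confusion_matrix

y_true = np.array([0] * 96 + [1] * 4)   # 96 healthy, 4 cancer cases
y_pred = np.zeros(100, dtype=int)       # a useless model that predicts "healthy" for everyone

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("accuracy:   ", accuracy_score(y_true, y_pred))   # 0.96
print("sensitivity:", recall_score(y_true, y_pred))     # 0.0 -- every cancer case missed
print("specificity:", tn / (tn + fp))                   # 1.0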
We want to predict the probability of death from heart disease based on three risk factors: age, gender, and blood cholesterol level. What is the most appropriate algorithm for this case?
The most appropriate algorithm for this case is logistic regression.
After studying the behavior of a population, you have identified four specific individual types that are valuable to your study. You would like to find all users who are most similar to each individual type. Which algorithm is most appropriate for this study?
As we are looking to group people by their similarity to four specific individual types, this indicates the value of k (k = 4). Therefore, K-means clustering is the most appropriate algorithm for this study.
You have run the association rules algorithm on your dataset, and the two rules {banana, apple} => {grape} and {apple, orange} => {grape} have been found to be relevant. What else must be true?
{grape, apple} must be a frequent itemset.
Your organization has a website where visitors randomly receive one of two coupons. It is also possible that visitors to the website will not receive a coupon. You have been asked to determine if offering a coupon to website visitors has any impact on their purchase decisions. Which analysis method should you use?
One-way ANOVA: in statistics, one-way analysis of variance is a technique that can be used to compare means of two or more samples. This technique can be used only for numerical response data, the “Y”, usually one variable, and numerical or categorical input data, the “X”, always one variable, hence “oneway”. The ANOVA tests the null hypothesis, which states that samples in all groups are drawn from populations with the same mean values. To do this, two estimates are made of the population variance. The ANOVA produces an F-statistic, the ratio of the variance calculated among the means to the variance within the samples. If the group means are drawn from populations with the same mean values, the variance between the group means should be lower than the variance of the samples, following the central limit theorem. A higher ratio therefore implies that the samples were drawn from populations with different mean values.
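A minimal sketch of a one-way ANOVA with SciPy; the purchase amounts for the three groups below are made up for illustration:
from scipy.stats import f_oneway

coupon_a = [22.1, 25.3, 27.8, 24.0, 26.5]
coupon_b = [19.4, 21.0, 23.3, 20.8, 22.7]
no_coupon = [18.2, 17.9, 20.1, 19.5, 18.8]

f_stat, p_value = f_oneway(coupon_a, coupon_b, no_coupon)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests at least one group mean differs, i.e. the coupons affect purchases.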
What are the feature vectors?
A feature vector is an n-dimensional vector of numerical features that represent an object. In machine learning, feature vectors are used to represent numeric or symbolic characteristics (called features) of an object in a mathematical way that’s easy to analyze.
What is root cause analysis?
Root cause analysis was initially developed to analyze industrial accidents but is now widely used in other areas. It is a problem-solving technique used for isolating the root causes of faults or problems. A factor is called a root cause if removing it from the problem-fault sequence prevents the final undesirable event from recurring.
Do gradient descent methods always converge to similar points?
They do not, because in some cases, they reach a local minimum or a local optimum point. You would not reach the global optimum point. This is governed by the data and the starting conditions.
What are the different Deep Learning Frameworks?
• PyTorch: an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook’s AI Research lab. It is free and open-source software released under the Modified BSD license.
• TensorFlow: a free and open-source software library for dataflow and differentiable programming across a range of tasks. It is a symbolic math library and is also used for machine learning applications such as neural networks. Licensed under Apache License 2.0 and developed by the Google Brain Team.
• Microsoft Cognitive Toolkit: describes neural networks as a series of computational steps via a directed graph.
• Keras: an open-source neural-network library written in Python. It is capable of running on top of TensorFlow, Microsoft Cognitive Toolkit, R, Theano, or PlaidML. Designed to enable fast experimentation with deep neural networks, it focuses on being user-friendly, modular, and extensible. Licensed under MIT.
What is LSTM?
Long Short-Term Memory (LSTM) is a special kind of recurrent neural network capable of learning long-term dependencies, remembering information for long periods as its default behavior. There are three steps in an LSTM network:
• Step 1: The network decides what to forget and what to remember.
• Step 2: It selectively updates cell state values.
• Step 3: The network decides what part of the current state makes it to the output.
What is a Multi-layer Perceptron (MLP)?
As in neural networks, MLPs have an input layer, a hidden layer, and an output layer. An MLP has the same structure as a single-layer perceptron but with one or more hidden layers.
A (single-layer) perceptron is a single-layer neural network that works as a linear binary classifier; a multi-layer perceptron is what we usually call a neural network. Because it has only one layer, a perceptron can be trained without more advanced algorithms such as back propagation; instead it can be trained by “stepping towards” your error in steps specified by a learning rate. When someone says perceptron, they usually mean the single-layer version.
What is exploding gradients?
While training an RNN, if you see exponentially growing (very large) error gradients which accumulate and result in very large updates to the neural network model weights during training, they’re known as exploding gradients. At an extreme, the values of weights can become so large as to overflow and result in NaN values. The explosion occurs through exponential growth by repeatedly multiplying gradients through the network layers that have values larger than 1.0. This has the effect of making your model unstable and unable to learn from your training data. There are some subtle signs that you may be suffering from exploding gradients during the training of your network, such as:
• The model is unable to get traction on your training data (e.g. poor loss).
• The model is unstable, resulting in large changes in loss from update to update.
• The model loss goes to NaN during training.
• The model weights quickly become very large during training.
• The error gradient values are consistently above 1.0 for each node and layer during training.
Solutions
1. Re-Design the Network Model:
a. In deep neural networks, exploding gradients may be addressed by redesigning the network to have fewer layers. There may also be some benefit in using a smaller batch size while training the network.
b. In RNNs, updating across fewer prior time steps during training, called truncated Backpropagation through time, may reduce the exploding gradient problem.
2. Use Long Short-Term Memory Networks: In RNNs, exploding gradients can be reduced by using the Long Short-Term Memory (LSTM) memory units and perhaps related gated-type neuron structures. Adopting LSTM memory units is a new best practice for recurrent neural networks for sequence prediction.
3. Use Gradient Clipping: Exploding gradients can still occur in very deep Multilayer Perceptron networks with a large batch size and in LSTMs with very long input sequence lengths. If exploding gradients are still occurring, you can check for and limit the size of gradients during the training of your network. This is called gradient clipping. Specifically, the values of the error gradient are checked against a threshold value and clipped or set to that threshold value if the error gradient exceeds the threshold (see the sketch after this list).
4. Use Weight Regularization: another approach, if exploding gradients are still occurring, is to check the size of network weights and apply a penalty to the network’s loss function for large weight values. This is called weight regularization, and often an L1 (absolute weights) or an L2 (squared weights) penalty can be used.
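Here is the gradient-clipping sketch referred to in item 3, clipping by global norm in plain NumPy; the gradient values are made up for illustration:
import numpy as np

def clip_by_norm(grads, max_norm=1.0):
    # compute the global norm across all parameter gradients
    norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    if norm > max_norm:
        scale = max_norm / norm
        grads = [g * scale for g in grads]   # rescale so the global norm equals max_norm
    return grads

grads = [np.array([3.0, -4.0]), np.array([12.0])]   # global norm = 13
clipped = clip_by_norm(grads, max_norm=1.0)
print([g.tolist() for g in clipped])                 # global norm is now 1.0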
What is vanishing gradients?
While training an RNN, your gradient (slope) can become too small; this makes training difficult. When the slope is too small, the problem is known as a Vanishing Gradient. It leads to long training times, poor performance, and low accuracy.
• Hyperbolic tangent and Sigmoid/Soft-max activations suffer from vanishing gradients.
• RNNs suffer from vanishing gradients, while LSTMs do not (which is why LSTMs are often used for sequence tasks such as stock price prediction). In fact, the propagation of error through previous layers makes the gradient get smaller and smaller, so the weights are barely updated.
Solutions
1. Choose ReLU
2. Use LSTM (for RNNs)
3. Use ResNet (Residual Network) → after some layers, add x again: F(x) → ⋯ → F(x) + x
4. Multi-level hierarchy: pre-train one layer at a time through unsupervised learning, then fine-tune via backpropagation
5. Gradient checking: a debugging strategy used to numerically track and assess gradients during training.
What is Gradient Descent?
Let’s first explain what a gradient is. A gradient is a mathematical function. When calculated at a point of a function, it gives the direction in which the function increases most. The gradient vector can be interpreted as the “direction and rate of fastest increase”. If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction. Further, the gradient is the zero vector at a point if and only if it is a stationary point (where the derivative vanishes). In Data Science, it simply measures the change in error with respect to a change in the weights, since we take the partial derivative of the loss function with respect to each weight w.
Gradient descent is a first-order iterative optimization algorithm for finding the minimum of a function.
The goal of gradient descent is to minimize a given function which, in our case, is the loss function of the neural network. To achieve this goal, it performs two steps iteratively:
1. Compute the slope (gradient), that is, the first-order derivative of the function at the current point.
2. Move in the direction opposite to the slope from the current point by the computed amount.
So, the idea is to pass the training set through the hidden layers of the neural network and then update the parameters of the layers by computing the gradients using the training samples from the training dataset. Think of it like this: suppose a man is at the top of a valley and he wants to get to the bottom of the valley, so he goes down the slope. He decides his next position based on his current position and stops when he gets to the bottom of the valley, which was his goal.
• Gradient descent is a popular iterative optimization algorithm and the basis for many other optimization techniques; it tries to obtain minimal loss in a model by tuning the weights/parameters in the objective function.
• Types of Gradient Descent:
Batch Gradient Descent
Stochastic Gradient Descent
Mini Batch Gradient Descent
• Steps to achieve minimal loss:
The first stage in gradient descent is to pick a starting value (a starting point) for w1, which is set to 0 by many algorithms.
The gradient descent algorithm then calculates the gradient of the loss curve at the starting point.
The gradient always points in the direction of steepest increase in the loss function. The gradient descent algorithm takes a step in the direction of the negative gradient in order to reduce loss as quickly as possible.
To determine the next point along the loss function curve, the gradient descent algorithm moves a step whose size is some fraction of the gradient’s magnitude, in the direction of the negative gradient, away from the starting point.
The gradient descent then repeats this process, edging ever closer to the minimum.
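A minimal sketch of this loop on a one-parameter loss, loss(w1) = (w1 - 3)^2, starting from w1 = 0 as described above:
def loss_gradient(w1):
    return 2 * (w1 - 3)        # derivative of (w1 - 3)^2

w1 = 0.0                       # starting point
learning_rate = 0.1
for step in range(50):
    w1 = w1 - learning_rate * loss_gradient(w1)   # move against the gradient

print(round(w1, 4))            # approaches 3, the minimum of the loss curve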
What is Back Propagation and how does it work?
Back propagation is a training algorithm used for neural networks. In this method, we update the weights of each layer recursively, moving backward from the last layer, using the gradient of the error with respect to each weight (w ← w − η ∂E/∂w).
It has the following steps:
• Forward propagation of training data (initializing weights with random or pre-assigned values)
• Gradients are computed using the outputs, the weights, and the target
• Back propagate to compute the gradients of the error from the output activations back through the earlier layers
• Update the weights
Batch Gradient Descent: all the training data is taken into consideration to take a single step. We take the average of the gradients of all the training examples and then use that mean gradient to update our parameters; that is one step of gradient descent in one epoch. Batch Gradient Descent is great for convex or relatively smooth error manifolds. In this case, we move somewhat directly towards an optimum solution. The graph of cost versus epochs is also quite smooth because we are averaging over all the gradients of the training data for a single step, so the cost keeps decreasing over the epochs.
Stochastic Gradient Descent (SGD): in Batch Gradient Descent we consider all the examples for every step, but what if our dataset is very large? Deep learning models crave data, and the more data, the better the chances of a good model. Suppose our dataset has 5 million examples; then, just to take one step, the model has to calculate the gradients of all 5 million examples, which is not efficient. To tackle this problem, we have Stochastic Gradient Descent. In SGD, we consider just one example at a time to take a single step. We do the following steps in one epoch for SGD:
1. Take an example
2. Feed it to the neural network
3. Calculate its gradient
4. Use the gradient we calculated in step 3 to update the weights
5. Repeat steps 1–4 for all the examples in the training dataset
Since we are considering just one example at a time, the cost will fluctuate over the training examples and will not necessarily decrease. But in the long run, you will see the cost decreasing with fluctuations. Also, because the cost fluctuates so much, it will never reach the minimum but will keep oscillating around it. SGD can be used for larger datasets. It converges faster when the dataset is large as it causes updates to the parameters more frequently.
Mini-batch Gradient Descent: one of the most popular optimization algorithms. It is a variant of Stochastic Gradient Descent in which, instead of a single training example, a mini-batch of samples is used. Batch Gradient Descent can be used for smoother curves and converges directly to the minimum; SGD converges faster for larger datasets. But since in SGD we use only one example at a time, we cannot use a vectorized implementation, which can slow down the computations. To tackle this problem, a mixture of Batch Gradient Descent and SGD is used: we use neither the whole dataset at once nor a single example at a time, but a batch of a fixed number of training examples smaller than the actual dataset, called a mini-batch. Doing this gives us the advantages of both of the former variants. So, after creating the mini-batches of fixed size, we do the following steps in one epoch:
1. Pick a mini-batch
2. Feed it to the neural network
3. Calculate the mean gradient of the mini-batch
4. Use the mean gradient we calculated in step 3 to update the weights
5. Repeat steps 1–4 for the mini-batches we created
Just like SGD, the average cost over the epochs fluctuates in mini-batch gradient descent because we are averaging over a small number of examples at a time. So, with mini-batch gradient descent we update our parameters frequently and can also use a vectorized implementation for faster computations.
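A minimal NumPy sketch of the mini-batch loop above, fitting a linear regression on synthetic data:
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=1000)

w = np.zeros(3)
lr, batch_size = 0.1, 32
for epoch in range(20):
    idx = rng.permutation(len(X))                       # shuffle each epoch
    for start in range(0, len(X), batch_size):
        batch = idx[start:start + batch_size]           # pick a mini-batch
        Xb, yb = X[batch], y[batch]
        grad = 2 * Xb.T @ (Xb @ w - yb) / len(batch)    # mean gradient of the mini-batch
        w -= lr * grad                                   # update weights

print(np.round(w, 2))   # close to [ 2.  -1.   0.5]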
While we continue to integrate ML systems in high-stakes environments such as medical settings, roads, command control centers, we need to ensure they do not cause the loss of life. How can you handle this?
By focusing on the following, which includes everything beyond just developing SOTA models, as well as the inclusion of key stakeholders.
🔹Robustness: Create models that are resilient to adversaries, unusual situations, and Black Swan events
🔹Monitoring: Detect malicious use, monitor predictions, and discover unexpected model functionality
🔹Alignment: Build models that represent and safely optimize hard-to-specify human values
🔹External Safety: Use ML to address risks to how ML systems are handled, such as cyber attacks
You are given a data set. The data set has missing values that spread along 1 standard deviation from the median. What percentage of data would remain unaffected? Why?
Since the data is spread across the median, let’s assume it’s a normal distribution. We know, in a normal distribution, ~68% of the data lies in 1 standard deviation from mean (or mode, median), which leaves ~32% of the data unaffected. Therefore, ~32% of the data would remain unaffected by missing values.
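A quick empirical check of the 68 percent figure, assuming NumPy:
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=0.0, scale=1.0, size=1_000_000)
within_one_sd = np.mean(np.abs(data - data.mean()) <= data.std())
print(round(within_one_sd, 3))   # ~0.683, leaving ~32% of the data outside 1 SD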
What are PCA, KPCA, and ICA used for?
PCA (Principal Component Analysis), KPCA (Kernel-based Principal Component Analysis) and ICA (Independent Component Analysis) are important feature extraction techniques used for dimensionality reduction.
What is the bias-variance decomposition of classification error in the ensemble method?
The expected error of a learning algorithm can be decomposed into bias and variance. A bias term measures how closely the average classifier produced by the learning algorithm matches the target function. The variance term measures how much the learning algorithm’s prediction fluctuates for different training sets.
When is Ridge regression favorable over Lasso regression?
You can quote ISLR’s authors Hastie and Tibshirani, who assert that, in the presence of a few variables with medium/large effect sizes, lasso regression should be used; in the presence of many variables with small/medium effect sizes, ridge regression should be used. Conceptually, lasso regression (L1) does both variable selection and parameter shrinkage, whereas ridge regression only does parameter shrinkage and ends up including all the coefficients in the model. In the presence of correlated variables, ridge regression might be the preferred choice. Also, ridge regression works best in situations where the least squares estimates have higher variance. Therefore, it depends on our model objective.
You’ve built a random forest model with 10000 trees. You got delighted after getting training error as 0.00. But, the validation error is 34.23. What is going on? Haven’t you trained your model perfectly?
The model has overfitted. A training error of 0.00 means the classifier has mimicked the training data patterns so closely that they are not present in the unseen data. Hence, when this classifier is run on an unseen sample, it cannot find those patterns and returns predictions with higher error. In a random forest, this happens when we use a larger number of trees than necessary. Hence, to avoid this situation, we should tune the number of trees using cross-validation.
What is a convex hull?
In the case of linearly separable data, the convex hull represents the outer boundaries of the two groups of data points. Once the convex hull is created, we get maximum margin hyperplane (MMH) as a perpendicular bisector between two convex hulls. MMH is the line which attempts to create the greatest separation between two groups.
What do you understand by Type I vs Type II error?
Type I error is committed when the null hypothesis is true and we reject it, also known as a ‘False Positive’. Type II error is committed when the null hypothesis is false and we accept it, also known as ‘False Negative’. In the context of the confusion matrix, we can say Type I error occurs when we classify a value as positive (1) when it is actually negative (0). Type II error occurs when we classify a value as negative (0) when it is actually positive(1).
In k-means or kNN, we use euclidean distance to calculate the distance between nearest neighbors. Why not manhattan distance?
We don’t use manhattan distance because it calculates distance horizontally or vertically only. It has dimension restrictions. On the other hand, the euclidean metric can be used in any space to calculate distance. Since the data points can be present in any dimension, euclidean distance is a more viable option.
Example: think of a chessboard; the movement made by a rook is naturally measured by Manhattan distance because it moves only vertically and horizontally.
Do you suggest that treating a categorical variable as a continuous variable would result in a better predictive model?
For better predictions, the categorical variable can be considered as a continuous variable only when the variable is ordinal in nature.
OLS is to linear regression what maximum likelihood is to logistic regression. Explain the statement.
OLS and maximum likelihood are the methods used by the respective regression techniques to estimate the unknown parameter (coefficient) values. In simple words, Ordinary Least Squares (OLS) is a method used in linear regression which estimates the parameters so as to minimize the distance between the actual and predicted values. Maximum likelihood helps in choosing the parameter values that are most likely to have produced the observed data.
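As a sketch of the OLS side of this statement, here are the normal equations solved with NumPy on synthetic data (logistic regression has no such closed form, so its maximum-likelihood estimate is found iteratively):
import numpy as np

rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.normal(size=200)])   # intercept + one feature
beta_true = np.array([1.5, 0.8])
y = X @ beta_true + rng.normal(scale=0.2, size=200)

beta_ols = np.linalg.solve(X.T @ X, X.T @ y)   # (X'X)^-1 X'y minimizes squared error
print(np.round(beta_ols, 2))                   # close to [1.5, 0.8]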
When does regularization become necessary in Machine Learning?
Regularization becomes necessary when the model begins to overfit. This technique introduces a cost term for bringing in more features with the objective function. Hence, it tries to push the coefficients for many variables towards zero and thereby reduce the cost term. This helps to reduce model complexity so that the model can become better at predicting (generalizing).
What is Linear Regression?
Linear Regression is a supervised Machine Learning algorithm. It is used to find the linear relationship between the dependent and the independent variables for predictive analysis.
• Linear regression assumes that the relationship between the features and the target vector is approximately linear. That is, the effect of the features on the target vector is constant.
• In linear regression, the target variable y is assumed to follow a linear function of one or more predictor variables plus some random error. The machine learning task is to estimate the parameters of this equation which can be achieved in two ways:
• The first approach is through the lens of minimizing loss. A common practice in machine learning is to choose a loss function that defines how well a model with a given set of parameters estimates the observed data. The most common loss function for linear regression is squared error loss.
• The second approach is through the lens of maximizing the likelihood. Another common practice in machine learning is to model the target as a random variable whose distribution depends on one or more parameters, and then find the parameters that maximize its likelihood.
What is the Variance Inflation Factor?
Variance Inflation Factor (VIF) is an estimate of the amount of multicollinearity in a collection of regression variables.
VIF = variance of the full model / variance of the model with a single independent variable
We have to calculate this ratio for every independent variable. A high VIF indicates high collinearity of that independent variable with the others.
We know that one hot encoding increases the dimensionality of a dataset, but label encoding doesn’t. How?
When we use one-hot encoding, there is an increase in the dimensionality of a dataset. The reason for the increase in dimensionality is that, for every class in the categorical variables, it forms a different variable.
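A minimal pandas sketch of the difference, using a made-up "city" column:
import pandas as pd

df = pd.DataFrame({"city": ["Paris", "Tokyo", "Lima", "Tokyo"]})

one_hot = pd.get_dummies(df, columns=["city"])        # one new column per class
label_encoded = df.copy()
label_encoded["city"] = df["city"].astype("category").cat.codes   # still one column

print(one_hot.shape, label_encoded.shape)   # (4, 3) vs (4, 1)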
What is a Decision Tree?
A decision tree is used to explain the sequence of actions that must be performed to get the desired output. It is a hierarchical diagram that shows the actions.
What is the Binarizing of data? How to Binarize?
Converting data into binary values on the basis of a threshold is known as binarizing the data. Values below the threshold are set to 0 and values above the threshold are set to 1. This process is useful for feature engineering, and we can also use it to add new binary features.
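A minimal sketch of binarizing with a threshold of 1.0, first in plain NumPy and then with scikit-learn's Binarizer:
import numpy as np
from sklearn.preprocessing import Binarizer

X = np.array([[0.2, 1.7, 3.0], [2.5, 0.1, 0.9]])

binary_np = (X > 1.0).astype(int)                       # plain NumPy thresholding
binary_sk = Binarizer(threshold=1.0).fit_transform(X)   # same result via scikit-learn

print(binary_np)
print(binary_sk)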
What is cross-validation?
Cross-validation is essentially a technique used to assess how well a model performs on a new independent dataset. The simplest example of cross-validation is when you split your data into two groups: training data and testing data, where you use the training data to build the model and the testing data to test the model.
• Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter called k that refers to the number of groups that a given data sample is to be split into. As such, the procedure is often called k-fold cross-validation.
• Cross-validation is primarily used in applied machine learning to estimate the skill of a machine learning model on unseen data. That is, to use a limited sample in order to estimate how the model is expected to perform in general when used to make predictions on data not used during the training of the model.
• It is a popular method because it is simple to understand and because it generally results in a less biased or less optimistic estimate of the model skill than other methods, such as a simple train/test split.
• Procedure for K-Fold Cross Validation:
1. Shuffle the dataset randomly.
2. Split the dataset into k groups.
3. For each unique group:
a. Take the group as a holdout or test data set
b. Take the remaining groups as a training data set
c. Fit a model on the training set and evaluate it on the test set
d. Retain the evaluation score and discard the model
4. Summarize the skill of the model using the sample of model evaluation scores.
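A minimal sketch of this procedure with scikit-learn, both as an explicit KFold loop and via the cross_val_score shortcut:
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

kf = KFold(n_splits=5, shuffle=True, random_state=0)    # steps 1 and 2: shuffle and split
scores = []
for train_idx, test_idx in kf.split(X):                 # step 3: hold out each group once
    model.fit(X[train_idx], y[train_idx])
    scores.append(model.score(X[test_idx], y[test_idx]))
print("manual k-fold:", scores)                          # step 4: summarize the scores

print("cross_val_score:", cross_val_score(model, X, y, cv=kf))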
When would you use random forests vs SVM and why?
There are a couple of reasons why a random forest is a better choice of model than a support vector machine:
● Random forests allow you to determine the feature importance. SVMs can’t do this.
● Random forests are much quicker and simpler to build than an SVM.
● For multi-class classification problems, SVMs require a one-vs-rest method, which is less scalable and more memory intensive.
What are the drawbacks of a linear model?
There are a couple of drawbacks of a linear model:
● A linear model holds some strong assumptions that may not be true in the application. It assumes a linear relationship, multivariate normality, no or little multicollinearity, no auto-correlation, and homoscedasticity.
● A linear model can’t be used for discrete or binary outcomes.
● You can’t vary the model flexibility of a linear model.
What are support vector machines?
Support vector machines are supervised learning algorithms used for classification and regression analysis.
What is batch statistical learning?
Statistical learning techniques allow learning a function or predictor from a set of observed data that can make predictions about unseen or future data. These techniques provide guarantees on the performance of the learned predictor on the future unseen data based on a statistical assumption on the data generating process.
Do you think 50 small decision trees are better than a large one? Why?
Another way of asking this question is “Is a random forest a better model than a decision tree?” And the answer is yes because a random forest is an ensemble method that takes many weak decision trees to make a strong learner. Random forests are more accurate, more robust, and less prone to overfitting.
What is a kernel? Explain the kernel trick
A kernel is a way of computing the dot product of two vectors x and y in some (possibly very high dimensional) feature space, which is why kernel functions are sometimes called “generalized dot product” The kernel trick is a method of using a linear classifier to solve a non-linear problem by transforming linearly inseparable data to linearly separable ones in a higher dimension.
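A small numerical sketch of the idea: the degree-2 polynomial kernel (x·y)^2 gives the same number as an ordinary dot product taken in an explicitly constructed higher-dimensional feature space:
import numpy as np

def phi(v):
    # explicit feature map for the degree-2 polynomial kernel in 2-D
    x1, x2 = v
    return np.array([x1 * x1, x2 * x2, np.sqrt(2) * x1 * x2])

x = np.array([1.0, 2.0])
y = np.array([3.0, 4.0])

kernel_value = np.dot(x, y) ** 2          # kernel trick: work in the original space
explicit_value = np.dot(phi(x), phi(y))   # same number via the explicit feature map

print(kernel_value, explicit_value)        # both 121.0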
State the differences between causality and correlation?
Causality applies to situations where one action, say X, causes an outcome, say Y, whereas Correlation is just relating one action (X) to another action(Y) but X does not necessarily cause Y.
What is the exploding gradient problem while using the backpropagation technique?
When large error gradients accumulate and result in large changes in the neural network weights during training, it is called the exploding gradient problem. The values of weights can become so large as to overflow and result in NaN values. This makes the model unstable and the learning of the model to stall just like the vanishing gradient problem.
What do you mean by Associative Rule Mining (ARM)?
Associative Rule Mining is one of the techniques to discover patterns in data like features (dimensions) which occur together and features (dimensions) which are correlated.
What is Marginalization? Explain the process.
Marginalization is summing the probability of a random variable X given the joint probability distribution of X with other variables. It is an application of the law of total probability.
Why is the rotation of components so important in Principal Component Analysis (PCA)?
Rotation in PCA is very important as it maximizes the separation within the variance obtained by all the components because of which interpretation of components would become easier. If the components are not rotated, then we need extended components to describe the variance of the components.
What is the difference between regularization and normalization?
Normalization adjusts the data; regularization adjusts the prediction function. If your data is on very different scales (especially low to high), you would want to normalize the data: alter each column to have compatible basic statistics. This can be helpful to make sure there is no loss of accuracy. One of the goals of model training is to identify the signal and ignore the noise; if the model is given free rein to minimize error, there is a possibility of suffering from overfitting. Regularization imposes some control on this by favoring simpler fitting functions over complex ones.
How does the SVM algorithm deal with self-learning?
SVM has a learning rate and expansion rate which takes care of this. The learning rate compensates or penalizes the hyperplanes for making all the wrong moves and expansion rate deals with finding the maximum separation area between classes.
How do you handle outliers in the data?
An outlier is an observation in the data set that lies far away from the other observations. We can discover outliers using tools and functions like box plots, scatter plots, Z-scores, the IQR score, etc., and then handle them based on the visualization we have obtained. To handle outliers, we can cap them at some threshold, use transformations to reduce the skewness of the data, or remove them if they are anomalies or errors.
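A minimal pandas sketch of IQR-based detection and capping on a made-up series:
import pandas as pd

s = pd.Series([10, 12, 11, 13, 12, 95, 11, 10, -40, 12])

q1, q3 = s.quantile(0.25), s.quantile(0.75)
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr

outliers = s[(s < lower) | (s > upper)]   # detect: 95 and -40 fall outside the fences
capped = s.clip(lower, upper)             # handle: cap values at the IQR fences

print(outliers.tolist())
print(capped.tolist())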
What are some techniques used to find similarities in the recommendation system?
Pearson correlation and cosine similarity are techniques used to find similarities in recommendation systems.
Why would you Prune your tree?
In the context of data science or AI/ML, pruning refers to the process of removing redundant branches of a decision tree. Decision trees are prone to overfitting, and pruning the tree helps to reduce its size and minimize the chances of overfitting. Pruning involves turning branches of a decision tree into leaf nodes and removing the leaf nodes from the original branch. It serves as a tool to manage the bias-variance tradeoff.
What are some of the EDA Techniques?
Exploratory Data Analysis (EDA) helps analysts to understand the data better and forms the foundation of better models.
Visualization
● Univariate visualization
● Bivariate visualization
● Multivariate visualization
Missing Value Treatment – replace missing values with the mean or median.
Outlier Detection – use a box plot to identify the distribution of outliers, then apply the IQR to set the boundary for outliers.
What is data augmentation?
Data augmentation is a technique for synthesizing new data by modifying existing data in such a way that the target is not changed, or is changed in a known way. Computer vision is one of the fields where data augmentation is very useful. There are many modifications that we can make to images:
● Resize
● Horizontal or vertical flip
● Rotate
● Add noise
● Deform
● Modify colors
Each problem needs a customized data augmentation pipeline. For example, in OCR, doing flips will change the text and won’t be beneficial; however, resizing and small rotations may help.
What is Inductive Logic Programming in Machine Learning (ILP)?
Inductive Logic Programming (ILP) is a subfield of machine learning which uses logic programming to represent background knowledge and examples.
What is the difference between inductive machine learning and deductive machine learning?
The differences are as follows: in inductive machine learning, the model learns by example from a set of observed instances in order to draw a generalized conclusion, whereas in deductive learning the model starts from a set of rules or conclusions and applies them to derive specific inferences.
What is the Difference between machine learning and deep learning?
Machine learning is a branch of computer science and a method to implement artificial intelligence. This technique provides the ability to automatically learn and improve from experiences without being explicitly programmed. Deep learning can be said as a subset of machine learning. It is mainly based on the artificial neural network where data is taken as an input and the technique makes intuitive decisions using the artificial neural network.
What Are The Steps Involved In Machine Learning Project?
When you plan a machine learning project, there are several important steps you must follow to achieve a good working model: data collection, data preparation, choosing a machine learning model, training the model, model evaluation, parameter tuning, and lastly prediction.
What are Differences between Artificial Intelligence and Machine Learning?
Artificial intelligence is a broader prospect than machine learning. Artificial intelligence mimics the cognitive functions of the human brain. The purpose of AI is to carry out a task in an intelligent manner based on algorithms. On the other hand, machine learning is a subclass of artificial intelligence. To develop an autonomous machine in such a way so that it can learn without being explicitly programmed is the goal of machine learning.
What are the steps Needed to choose the Appropriate Machine Learning Algorithm for your Classification problem?
Firstly, you need to have a clear picture of your data, your constraints, and your problems before heading towards different machine learning algorithms. Secondly, you have to understand which type and kind of data you have because it plays a primary role in deciding which algorithm you have to use.
Following this step is the data categorization step, which is a two-step process: categorization by input and categorization by output. The next step is to understand your constraints: what is your data storage capacity? How fast does the prediction have to be? And so on.
Finally, find the available machine learning algorithms and implement them wisely. Along with that, also try to optimize the hyperparameters which can be done in three ways – grid search, random search, and Bayesian optimization.
What is the Convex Function?
A convex function is a continuous function whose value at the midpoint of every interval in its domain is less than or equal to the numerical mean of its values at the two ends of the interval.
What’s the Relationship between True Positive Rate and Recall?
The true positive rate in machine learning is the percentage of actual positives that have been correctly identified, and recall is the fraction of relevant (positive) results that have been correctly identified. Therefore, they are the same thing, just with different names. It is also known as sensitivity.
What are some tools for parallelizing Machine Learning Algorithms?
Almost all machine learning algorithms are easy to serialize. Some of the basic tools for parallelizing are Matlab, Weka, R, Octave, or the Python-based scikit-learn.
What is meant by Genetic Programming?
Genetic Programming (GP) is almost similar to an Evolutionary Algorithm, a subset of machine learning. Genetic programming software systems implement an algorithm that uses random mutation, a fitness function, crossover, and multiple generations of evolution to resolve a user-defined task. The genetic programming model is based on testing and choosing the best option among a set of results.
What is meant by Bayesian Networks?
Bayesian Networks, also referred to as ‘belief networks’ or ‘causal networks’, are used to represent a graphical model of the probability relationships among a set of variables. For example, a Bayesian network can be used to represent the probabilistic relationships between diseases and symptoms. Given the symptoms, the network can also compute the probabilities of the presence of various diseases. Efficient algorithms can perform inference or learning in Bayesian networks. Bayesian networks that relate variables over time (e.g., speech signals or protein sequences) are called dynamic Bayesian networks.
Which are the two components of the Bayesian logic program?
A Bayesian logic program consists of two components:
● Logical: it contains a set of Bayesian clauses, which capture the qualitative structure of the domain.
● Quantitative: it is used to encode quantitative information about the domain.
How is machine learning used in day-to-day life?
Most people are already using machine learning in their everyday life. When you engage with the internet, you are actually expressing your preferences, likes, and dislikes through your searches. All these things are picked up by cookies on your computer, and from them the behavior of a user is evaluated. This helps improve a user’s progress through the internet and provides similar suggestions. A navigation system can also be considered one of the examples, where machine learning calculates the distance between two places using optimization techniques.
What is Sampling? Why do we need it?
Sampling is a process of choosing a subset from a target population that would serve as its representative. We use the data from the sample to understand the pattern in the community as a whole. Sampling is necessary because often, we can not gather or process the complete data within a reasonable time.
What does the term decision boundary mean?
A decision boundary or a decision surface is a hypersurface which divides the underlying feature space into two subspaces, one for each class. If the decision boundary is a hyperplane, then the classes are linearly separable.
Define entropy?
Entropy is the measure of uncertainty associated with random variable Y. It is the expected number of bits required to communicate the value of the variable.
Indicate the top intents of machine learning?
The top intents of machine learning are stated below:
● The system gets information from the already established computations to give well-founded decisions and outputs.
● It locates certain patterns in the data and then makes predictions from them to provide answers.
Highlight the differences between the Generative model and the Discriminative model?
The aim of the generative model is to generate new samples from the same distribution and new data instances, whereas the discriminative model highlights the differences between different kinds of data instances; it tries to learn directly from the data and then classifies the data.
Identify the most important aptitudes of a machine learning engineer?
Machine learning allows a computer to learn by itself without being explicitly programmed. It helps the system to learn from experience and then improve from its mistakes. An intelligent system based on machine learning can learn from recorded data and past incidents. In-depth knowledge of statistics, probability, data modelling, a programming language, as well as CS fundamentals, the application of ML libraries and algorithms, and software design is required to become a successful machine learning engineer.
What is feature engineering? How do you apply it in the process of modelling?
Feature engineering is the process of transforming raw data into features that better represent the underlying problem to the predictive models, resulting in improved model accuracy on unseen data.
How can learning curves help create a better model?
Learning curves give the indication of the presence of overfitting or underfitting. In a learning curve, the training error and cross-validating error are plotted against the number of training data points.
Perception: Vision, Audio, Speech, Natural Language
NLP: TF-IDF helps you to establish what?
TF-IDF helps to establish how important a particular word is in the context of the document corpus. It takes into account the number of times the word appears in a document, offset by the number of documents in the corpus that contain the word.
– TF is the frequency of term divided by a total number of terms in the document.
– IDF is obtained by dividing the total number of documents by the number of documents containing the term and then taking the logarithm of that quotient.
– TF-IDF is then the product of the two values, TF and IDF.
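A minimal sketch with scikit-learn's TfidfVectorizer on a tiny made-up corpus; note that scikit-learn uses a smoothed variant of the IDF formula above, so the exact weights differ slightly from the textbook definition:
from sklearn.feature_extraction.text import TfidfVectorizer

corpus = [
    "data science uses data",
    "machine learning uses data",
    "deep learning models",
]

vectorizer = TfidfVectorizer()
tfidf = vectorizer.fit_transform(corpus)

print(vectorizer.get_feature_names_out())
print(tfidf.toarray().round(2))   # terms that occur in more documents receive a lower IDF weight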
List 10 use cases to be solved using NLP techniques?
● Sentiment Analysis
● Language Translation (English to German, Chinese to English, etc.)
● Document Summarization
● Question Answering
● Sentence Completion
● Attribute extraction (key information extraction from documents)
● Chatbot interactions
● Topic classification
● Intent extraction
● Grammar or sentence correction
● Image captioning
● Document ranking
● Natural language inference
Which NLP model gives the best accuracy amongst the following: BERT, XLNET, GPT-2, ELMo
XLNet has given the best accuracy among these models. It has outperformed BERT on 20 tasks and achieves state-of-the-art results on 18 tasks, including sentiment analysis, question answering, and natural language inference.
What is Naive Bayes algorithm, When we can use this algorithm in NLP?
The Naive Bayes algorithm is a family of classifiers based on Bayes’ theorem. These NLP models can be used for a wide range of classification tasks, including sentiment prediction, spam filtering, document classification, and more. Naive Bayes converges faster and requires less training data; compared to discriminative models such as logistic regression, a Naive Bayes model takes less time to train. This algorithm is well suited to working with multiple classes and to text classification where the data is dynamic and changes frequently.
Explain Dependency Parsing in NLP?
Dependency Parsing, also known as syntactic parsing in NLP, is the process of assigning a syntactic structure to a sentence and identifying its dependency parse. This process is crucial for understanding the correlations between the “head” words in the syntactic structure. Dependency parsing can be a little complex considering that a sentence can have more than one dependency parse; a sentence that admits multiple parse trees is said to be ambiguous. Dependency parsing needs to resolve these ambiguities in order to effectively assign a syntactic structure to a sentence. Apart from syntactic structuring, dependency parsing can also be used in the semantic analysis of a sentence.
What is text Summarization?
Text summarization is the process of shortening a long piece of text while keeping its meaning and effect intact. It aims to create a summary of any given piece of text that outlines the main points of the document. This technique has improved in recent times and is capable of summarizing large volumes of text successfully. Text summarization has proved to be a blessing, since machines can summarize large volumes of text in no time, which would otherwise be very time-consuming. There are two types of text summarization: ● Extraction-based summarization ● Abstraction-based summarization
What is NLTK? How is it different from Spacy?
NLTK, or the Natural Language Toolkit, is a suite of libraries and programs used for symbolic and statistical natural language processing. It contains some of the most powerful libraries, which can apply different ML techniques to break down and understand human language. NLTK is used for lemmatization, punctuation handling, character counts, tokenization, and stemming. The differences between NLTK and spaCy are as follows: ● While NLTK offers a collection of algorithms to choose from, spaCy ships with the algorithm considered best suited to each problem ● NLTK supports a wider range of languages than spaCy ● spaCy has an object-oriented library, while NLTK is largely a string-processing library ● spaCy supports word vectors, while NLTK does not
What is information extraction?
Information extraction in the context of Natural Language Processing refers to the technique of automatically extracting structured information from unstructured sources in order to ascribe meaning to it. This can include extracting information about attributes of entities, relationships between different entities, and more. The various modules of information extraction include: ● Tagger Module ● Relation Extraction Module ● Fact Extraction Module ● Entity Extraction Module ● Sentiment Analysis Module ● Network Graph Module ● Document Classification & Language Modeling Module
What is Bag of Words?
Bag of Words is a commonly used model that depends on word frequencies or occurrences to train a classifier. This model creates an occurrence matrix for documents or sentences irrespective of its grammatical structure or word order.
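A short sketch of a Bag of Words occurrence matrix built with scikit-learn's CountVectorizer:

```python
from sklearn.feature_extraction.text import CountVectorizer

sentences = ["John likes to watch movies",
             "Mary likes movies too"]

vectorizer = CountVectorizer(lowercase=True)
bow = vectorizer.fit_transform(sentences)

# Each row is a sentence, each column a vocabulary word; word order is discarded
print(vectorizer.get_feature_names_out())
print(bow.toarray())
```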
What is Pragmatic Ambiguity in NLP?
Pragmatic ambiguity refers to words that have more than one meaning, where the intended meaning depends entirely on the context of the sentence. More often than not, we come across sentences containing words with multiple meanings, which leaves the sentence open to interpretation. This possibility of multiple interpretations is known as pragmatic ambiguity in NLP.
What is a Masked Language Model?
A masked language model is trained to predict tokens that were masked (corrupted) in the input. Recovering the original words from the corrupted input helps the model learn deep representations that transfer well to downstream tasks.
What are the best NLP Tools?
Some of the best NLP tools from open sources are: ● SpaCy ● TextBlob ● Textacy ● Natural language Toolkit ● Retext ● NLP.js ● Stanford NLP ● CogcompNLP
What is POS tagging?
Parts-of-speech tagging, better known as POS tagging, is the process of identifying the words in a document and labelling each with its part of speech based on context. POS tagging is also known as grammatical tagging, since it involves understanding grammatical structures and identifying the respective components. POS tagging is a complicated process because the same word can be a different part of speech depending on the context; for the same reason, a simple word-to-tag lookup is quite ineffective for POS tagging.
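A quick sketch of POS tagging with NLTK (the exact resource names to download vary slightly between NLTK versions):

```python
import nltk

# One-time downloads of the tokenizer and tagger resources
nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("The quick brown fox jumps over the lazy dog")
print(nltk.pos_tag(tokens))
# e.g. [('The', 'DT'), ('quick', 'JJ'), ('brown', 'JJ'), ('fox', 'NN'), ('jumps', 'VBZ'), ...]
```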
What is NER?
Named entity recognition, more commonly known as NER, is the process of identifying specific entities in a text document that are particularly informative and have a unique context. These often denote places, people, organizations, and more. Even though it may seem like these entities are just proper nouns, the NER process does far more than identify nouns. In fact, NER involves entity chunking or extraction, wherein entities are segmented and categorized under different predefined classes. This step further helps in extracting information.
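A short sketch of NER with spaCy, assuming the small English model has already been installed (python -m spacy download en_core_web_sm):

```python
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("Apple is opening a new office in London in 2025.")

for ent in doc.ents:
    print(ent.text, ent.label_)   # e.g. Apple ORG, London GPE, 2025 DATE
```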
Explain the Masked Language Model?
Masked language modelling is the process in which the model predicts the original tokens from a corrupted (masked) input. This helps the model learn deep representations that are useful in downstream tasks; using such a model, you can predict a masked word from the other words in the sentence.
What is pragmatic analysis in NLP?
Pragmatic Analysis: it deals with outside-world knowledge, meaning knowledge that is external to the documents and/or queries. Pragmatic analysis reinterprets what was literally described in terms of what was actually meant, deriving the various aspects of language that require real-world knowledge.
What is perplexity in NLP?
The word “perplexed” means “puzzled” or “confused”. Perplexity in NLP is a way to measure the degree of uncertainty a language model has when predicting some text, and it is a standard way of evaluating language models. Perplexity can be high or low: low perplexity is good, because the model is less uncertain about the text, while high perplexity is bad, because the model struggles to predict the text.
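A minimal sketch of computing perplexity from hypothetical per-token probabilities, using perplexity = exp(average negative log-likelihood per token):

```python
import numpy as np

# Hypothetical probabilities a language model assigned to each token of a test sentence
token_probs = np.array([0.2, 0.1, 0.4, 0.25, 0.05])

cross_entropy = -np.mean(np.log(token_probs))   # average negative log-likelihood
perplexity = np.exp(cross_entropy)

print(round(float(perplexity), 2))   # lower is better
```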
What is ngram in NLP?
An n-gram in NLP is simply a contiguous sequence of n words. For example, consider these sequences: ● New York (2-gram) ● The Golden Compass (3-gram) ● She was there in the hotel (6-gram) Shorter n-grams such as “New York” appear far more frequently in a corpus than longer ones such as “She was there in the hotel”. Assigning probabilities to the occurrence of n-grams is useful for making next-word predictions and for spelling error correction.
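A small helper that extracts n-grams from a list of tokens:

```python
def ngrams(tokens, n):
    """Return the list of n-grams (as tuples) from a list of tokens."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "she was there in the hotel".split()
print(ngrams(tokens, 1))   # unigrams
print(ngrams(tokens, 2))   # bigrams, e.g. ('she', 'was'), ('was', 'there'), ...
print(ngrams(tokens, 3))   # trigrams
```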
Explain differences between AI, Machine Learning and NLP
Why self-attention is awesome?
“In terms of computational complexity, self-attention layers are faster than recurrent layers when the sequence length n is smaller than the representation dimensionality d, which is most often the case with sentence representations used by state-of-the-art models in machine translations, such as word-piece and byte-pair representations.” — from Attention is all you need.
What are stop words?
Stop words are said to be useless data for a search engine. Words such as articles, prepositions, etc. are considered stop words; examples include was, were, is, am, the, a, an, how, and why. In Natural Language Processing, we eliminate stop words to understand and analyze the meaning of a sentence. The removal of stop words is one of the most important tasks for search engines: engineers design search-engine algorithms to ignore stop words, which helps show relevant results for a query.
What is Latent Semantic Indexing (LSI)?
Latent semantic indexing is a mathematical technique used to improve the accuracy of the information retrieval process. LSI algorithms allow machines to detect the hidden (latent) correlations between semantics (words). To enhance information understanding, machines generate various concepts that associate the words of a sentence; the underlying technique is singular value decomposition (SVD). It is generally used to handle static and unstructured data. The matrix used for singular value decomposition contains rows for words and columns for documents. This method is well suited to identifying components and grouping them by type. The main principle behind LSI is that words carry a similar meaning when used in a similar context. Computational LSI models are slow in comparison to other models, but they are good at contextual awareness, which helps improve the analysis and understanding of a text or document.
What are Regular Expressions?
A regular expression is used to match and tag words. It consists of a series of characters for matching strings. Suppose, if A and B are regular expressions, then the following are true for them: ● If {ɛ} is a regular language, then ɛ is a regular expression for it. ● If A and B are regular expressions, then A + B is also a regular expression within the language {A, B}. ● If A and B are regular expressions, then the concatenation of A and B (A.B) is a regular expression. ● If A is a regular expression, then A* (A occurring multiple times) is also a regular expression.
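In practice, regular expressions are used through libraries such as Python's re module; a short sketch (the email pattern here is deliberately simplified):

```python
import re

text = "Contact us at support@example.com or sales@example.org"

# Match simple email addresses
pattern = r"[\w.+-]+@[\w-]+\.[\w.]+"
print(re.findall(pattern, text))          # ['support@example.com', 'sales@example.org']

# The formal operations above map onto regex syntax: union A|B, concatenation AB, repetition A*
print(bool(re.fullmatch(r"(ab)*", "ababab")))   # True: 'ab' repeated
```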
What are unigrams, bigrams, trigrams, and n-grams in NLP?
When we parse a sentence one word at a time, then it is called a unigram. The sentence parsed two words at a time is a bigram. When the sentence is parsed three words at a time, then it is a trigram. Similarly, n-gram refers to the parsing of n words at a time.
What are the steps involved in solving an NLP problem?
Below are the steps involved in solving an NLP problem:
1. Gather the text from the available dataset or by web scraping 2. Apply stemming and lemmatization for text cleaning 3. Apply feature engineering techniques 4. Embed using word2vec 5. Train the built model using neural networks or other Machine Learning techniques 6. Evaluate the model’s performance 7. Make appropriate changes in the model 8. Deploy the model
There are several common components of natural language processing. These components are very important for understanding NLP properly. Can you explain them in detail, with an example?
NLP normally makes use of a number of components. Some of the major ones are explained below: ● Entity extraction: identifying and extracting the critical entities from the available information, which helps segment the sentence by the entities it mentions. It can help determine whether a person is fictional or real, and perform the same kind of identification for organizations, events, geographic locations, and so on. ● Syntactic analysis: this mainly helps establish the proper ordering and grammatical structure of the available words.
When processing natural language we normally use the single common term NLP and apply it to every language. Can you explain this NLP terminology in detail, with an example?
This is a basic NLP interview question. There are several factors involved in explaining natural language processing; some of the key ones are given below:
● Vectors and weights: Google word vectors, TF-IDF, document length, word vectors. ● Structure of text: named entities, part-of-speech tagging, identifying the head of a sentence. ● Sentiment analysis: sentiment features, the entities the sentiment applies to, common sentiment dictionaries. ● Text classification: supervised learning, training set, dev/validation set, test set, features of the individual text, LDA. ● Machine reading: entity extraction, entity linking, DBpedia, libraries such as Pikes or FRED.
Explain briefly about word2vec
Word2Vec embeds words in a lower-dimensional vector space using a shallow neural network. The result is a set of word-vectors where vectors close together in vector space have similar meanings based on context, and word-vectors distant to each other have differing meanings. For example, apple and orange would be close together and apple and gravity would be relatively far. There are two versions of this model based on skip-grams (SG) and continuous-bag-of-words (CBOW).
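A minimal sketch with gensim's Word2Vec on a toy corpus (a real model would need a far larger corpus; gensim 4.x calls the dimensionality parameter vector_size, older versions call it size):

```python
from gensim.models import Word2Vec

sentences = [
    ["apple", "orange", "fruit", "juice"],
    ["apple", "banana", "fruit", "smoothie"],
    ["gravity", "physics", "force", "mass"],
]

# sg=1 selects the skip-gram variant; sg=0 would select CBOW
model = Word2Vec(sentences, vector_size=50, window=3, min_count=1, sg=1, epochs=200)

print(model.wv.most_similar("apple", topn=3))
```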
What are the metrics used to test an NLP model?
Accuracy, Precision, Recall and F1. Accuracy is the usual ratio of correct predictions to the total number of predictions, but going just by accuracy is naive considering the complexities involved, such as class imbalance.
What are some ways we can preprocess text input?
Here are several preprocessing steps that are commonly used for NLP tasks: ● case normalization: we can convert all input to the same case (lowercase or uppercase) as a way of reducing our text to a more canonical form ● punctuation/stop word/white space/special characters removal: if we don’t think these words or characters are relevant, we can remove them to reduce the feature space ● lemmatizing/stemming: we can also reduce words to their inflectional forms (i.e. walks → walk) to further trim our vocabulary ● generalizing irrelevant information: we can replace all numbers with a <NUMBER> token or all names with a <NAME> token.
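A rough sketch of such a preprocessing pipeline, using NLTK for stop words and lemmatization (which steps you keep depends on the task):

```python
import re
import string

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer

nltk.download("stopwords", quiet=True)
nltk.download("wordnet", quiet=True)

lemmatizer = WordNetLemmatizer()
stop_words = set(stopwords.words("english"))

def preprocess(text):
    text = text.lower()                                                # case normalization
    text = text.translate(str.maketrans("", "", string.punctuation))  # punctuation removal
    text = re.sub(r"\d+", "<NUMBER>", text)                            # generalize numbers
    tokens = [t for t in text.split() if t not in stop_words]          # stop-word removal
    return [lemmatizer.lemmatize(t) for t in tokens]                   # lemmatization

print(preprocess("He walks 3 dogs in the park, every day!"))
```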
How does the encoder-decoder structure work for language modelling?
The encoder-decoder structure is a deep learning model architecture responsible for several state of the art solutions, including Machine Translation. The input sequence is passed to the encoder where it is transformed to a fixed-dimensional vector representation using a neural network. The transformed input is then decoded using another neural network. Then, these outputs undergo another transformation and a SoftMax layer. The final output is a vector of probabilities over the vocabularies. Meaningful information is extracted based on these probabilities.
How would you implement an NLP system as a service, and what are some pitfalls you might face in production?
This is less of an NLP question than a question about productionizing machine learning models; there are, however, certain intricacies specific to NLP models.
Without diving too much into the productionization aspect, an ideal Machine Learning service will have: ● endpoint(s) that other business systems can use to make inference ● a feedback mechanism for validating model predictions ● a database to store predictions and ground truths from the feedback ● a workflow orchestrator which will (upon some signal) re-train and load the new model for serving based on the records from the database + any prior training data ● some form of model version control to facilitate rollbacks in case of bad deployments ● post-production accuracy and error monitoring
What are attention mechanisms and why do we use them?
This was a follow-up to the encoder-decoder question. Only the output from the last time step is passed to the decoder, resulting in a loss of information learned at previous time steps. This information loss is compounded for longer text sequences with more time steps. Attention mechanisms are a function of the hidden weights at each time step. When we use attention in encoder-decoder networks, the fixed-dimensional vector passed to the decoder becomes a function of all vectors outputted in the intermediary steps. Two commonly used attention mechanisms are additive attention and multiplicative attention. As the names suggest, additive attention is a weighted sum while multiplicative attention is a weighted multiplier of the hidden weights. During the training process, the model also learns weights for the attention mechanisms to recognize the relative importance of each time step.
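A minimal NumPy sketch of multiplicative (dot-product) attention over some hypothetical encoder states:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

rng = np.random.default_rng(0)
encoder_states = rng.normal(size=(4, 8))   # 4 time steps, 8-dimensional hidden vectors
query = rng.normal(size=8)                 # current decoder hidden state

scores = encoder_states @ query            # one score per encoder time step
weights = softmax(scores)                  # attention weights, sum to 1
context = weights @ encoder_states         # weighted sum of all encoder states

print(weights.round(3), context.shape)     # the context vector replaces a single fixed vector
```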
How can we handle misspellings for text input?
By using word embeddings trained over a large corpus (for instance, an extensive web scrape of billions of words), the model vocabulary would include common misspellings by design. The model can then learn the relationship between misspelled and correctly spelled words to recognize their semantic similarity. We can also preprocess the input to prevent misspellings. Terms not found in the model vocabulary can be mapped to the “closest” vocabulary term using: ● edit distance between strings ● phonetic distance between word pronunciations ● keyword distance to catch common typos
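A rough sketch of mapping an out-of-vocabulary word to its closest vocabulary term, here using difflib's string similarity as a simple stand-in for edit distance:

```python
import difflib

vocabulary = ["receive", "believe", "achieve", "retrieve"]

def closest_term(word, vocab, cutoff=0.6):
    """Map an out-of-vocabulary word to its most similar vocabulary term."""
    matches = difflib.get_close_matches(word, vocab, n=1, cutoff=cutoff)
    return matches[0] if matches else word

print(closest_term("recieve", vocabulary))   # -> 'receive'
```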
What are some common problems encountered when training deep neural networks, and how can they be addressed?
● Exploding gradients (solved by gradient clipping) ● Dying ReLU — no learning if the activation is 0 (solved by parametric ReLU) ● Mean and variance of activations not being 0 and 1 (partially solved by subtracting around 0.5 from the activation; better explained in the fastai videos)
What is the difference between learning latent features using SVD and getting embedding vectors using deep network?
SVD learns a linear combination of the inputs, while a neural network can learn nonlinear combinations.
What is the information in the hidden and cell state of LSTM?
The hidden state stores a summary of all the information up to that time step, while the cell state stores selected information that might be needed at future time steps.
When is self-attention not faster than recurrent layers?
When the sequence length is greater than the representation dimensions. This is rare.
What is the benefit of learning rate warm-up?
Learning rate warm-up is a learning rate schedule where you have low (or lower) learning rate at the beginning of training to avoid divergence due to unreliable gradients at the beginning. As the model becomes more stable, the learning rate would increase to speed up convergence.
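A minimal sketch of a linear warm-up schedule (the base learning rate and number of warm-up steps here are arbitrary):

```python
def lr_schedule(step, base_lr=1e-3, warmup_steps=1000):
    """Linearly ramp the learning rate up to base_lr, then hold it constant."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

for step in [0, 250, 500, 999, 5000]:
    print(step, round(lr_schedule(step), 6))
```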
What’s the difference between hard and soft parameter sharing in multi-task learning?
In hard parameter sharing, the tasks share a common set of hidden layers (with task-specific output heads) and are trained together, so the shared weights are updated using the losses of all tasks. In soft parameter sharing, each task has its own model and parameters, and a regularization term encourages the parameters of the different models to remain similar.
What’s the difference between BatchNorm and LayerNorm?
BatchNorm computes the mean and variance of each feature across the minibatch, whereas LayerNorm computes the mean and variance across the features of every sample independently. Batch normalisation also allows you to set higher learning rates, increasing the speed of training, because it reduces the instability caused by the initial weights.
Why does the transformer block have LayerNorm instead of BatchNorm?
Looking at the advantages of LayerNorm, it is robust to batch size and works better as it works at the sample level and not batch level.
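A short PyTorch sketch that makes the difference concrete: BatchNorm normalizes each feature across the batch, while LayerNorm normalizes each sample across its features:

```python
import torch
import torch.nn as nn

x = torch.randn(32, 64)            # a batch of 32 samples with 64 features each

batch_norm = nn.BatchNorm1d(64)    # statistics per feature, computed over the batch
layer_norm = nn.LayerNorm(64)      # statistics per sample, computed over the features

bn_out = batch_norm(x)
ln_out = layer_norm(x)

print(bn_out.mean(dim=0).abs().max())   # ~0: each feature column is centered over the batch
print(ln_out.mean(dim=1).abs().max())   # ~0: each sample row is centered over its features
```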
What changes would you make to your deep learning code if you knew there are errors in your training data?
We can use label smoothing, where the smoothing value is based on the estimated error rate. If a particular class is known to contain errors, we can also use class weights to modify the loss.
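A minimal sketch of label smoothing applied to one-hot targets (the smoothing value here is arbitrary):

```python
import numpy as np

def smooth_labels(one_hot, smoothing=0.1):
    """Move `smoothing` probability mass from the true class to a uniform distribution."""
    n_classes = one_hot.shape[-1]
    return one_hot * (1.0 - smoothing) + smoothing / n_classes

targets = np.eye(4)[[2, 0]]                 # two samples, 4 classes, one-hot encoded
print(smooth_labels(targets, smoothing=0.1))
# The true class gets 0.925 and every other class 0.025 (with 4 classes)
```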
What are the tricks used in ULMFiT? (Not a great question, but it checks awareness.)
● Language-model fine-tuning on the task text ● Weight dropout ● Discriminative learning rates for layers ● Gradual unfreezing of layers ● Slanted triangular learning rate schedule
This can be followed up with a question on explaining how each of these helps.
Tell me a language model which doesn’t use dropout
ALBERT v2 — This highlights the fact that a lot of assumptions we take for granted are not necessarily true. The regularization effect of parameter sharing in ALBERT is so strong that dropout is not needed. (ALBERT v1 had dropout.)
What are the differences between GPT and GPT-2?
● Layer normalization was moved to the input of each sub-block, similar to a residual unit of type “building block” (differently from the original type “bottleneck”, it has batch normalization applied before weight layers). ● An additional layer normalization was added after the final self-attention block. ● A modified initialization was constructed as a function of the model depth. ● The weights of residual layers were initially scaled by a factor of 1/√n where n is the number of residual layers. ● Use larger vocabulary size and context size.
What are the differences between GPT and BERT?
● GPT is not bidirectional and has no concept of masking ● BERT adds next sentence prediction task in training and so it also has a segment embedding
What are the differences between BERT and ALBERT v2?
● Embedding matrix factorisation (helps reduce the number of parameters) ● No dropout ● Parameter sharing (helps reduce the number of parameters and acts as regularisation)
How does parameter sharing in ALBERT affect the training and inference time?
No effect. Parameter sharing just decreases the number of parameters.
How would you reduce the inference time of a trained NN model?
● Serve on GPU/TPU/FPGA ● 16 bit quantisation and served on GPU with fp16 support ● Pruning to reduce parameters ● Knowledge distillation (To a smaller transformer model or simple neural network) ● Hierarchical softmax/Adaptive softmax ● You can also cache results as explained here.
Would you use BPE with classical models?
Of course! BPE is a smart tokeniser and it can give us a smaller vocabulary, which helps us build a model with fewer parameters.
How would you make an arxiv papers search engine?
How would you make a plagiarism detector?
Get top k results with TF-IDF similarity and then rank results with ● semantic encoding + cosine similarity ● a model trained for ranking
How would you build a sentiment classification system for news articles?
This is a trick question. The interviewee can mention things such as transfer learning and the latest models, but they need to talk about having a neutral class too; otherwise you can have really good accuracy/F1 and the model will still classify everything as positive or negative. The truth is that a lot of news is neutral, so the training data needs to include this class. The interviewee should also talk about how they would create a dataset and their training strategy, such as the selection of the language model, language-model fine-tuning, and using various datasets for multi-task learning.
What is the difference between regular expression and regular grammar?
A regular expression is a representation of a set of strings (a regular language) in the form of a mathematical expression built from a character sequence; it is used to match strings. Regular grammar, on the other hand, is a generator of such a language: it defines a set of rules and syntax which the strings in the language must follow.
Why should we use Batch Normalization?
Once the interviewer has asked you about the fundamentals of deep learning architectures, they will move on to the key topic of improving your deep learning model’s performance. Batch Normalization is one of the techniques used for reducing the training time of a deep learning algorithm. Just like normalizing our input helps improve a logistic regression model, we can normalize the activations of the hidden layers in our deep learning model as well.
How is backpropagation different in RNN compared to ANN?
In Recurrent Neural Networks, we have an additional loop at each node. This loop essentially adds a time component to the network, which helps in capturing sequential information from the data, something that is not possible in a generic artificial neural network. This is why backpropagation in an RNN is called Backpropagation Through Time: we backpropagate at each time step.
What are some challenges when dealing with computer vision problems?
Variations due to geometric changes (like pose, scale, etc.), variations due to photometric factors (like illumination, appearance, etc.), and image occlusion are all common challenges in computer vision.
Consider an image with width and height of 100×100. Each pixel in the image is a grayscale value, i.e. one of 256 possible values. How much space would this image require for storing?
The answer is 8 × 100 × 100 = 80,000 bits (10,000 bytes), because 8 bits are required to represent each of the 256 grayscale values (0–255).
Why do we use convolutions for images rather than just FC layers?
Firstly, convolutions preserve, encode, and actually use the spatial information from the image; if we used only FC layers we would have no relative spatial information. Secondly, Convolutional Neural Networks (CNNs) have a partially built-in translation invariance, since each convolution kernel acts as its own filter/feature detector.
What makes CNNs translation-invariant?
As explained above, each convolution kernel acts as its own filter/feature detector. So if you’re doing object detection, it doesn’t matter where in the image the object is, since we apply the convolution in a sliding-window fashion across the entire image anyway.
Why do we have max-pooling in classification CNNs?
Max-pooling in a CNN allows you to reduce computation, since your feature maps are smaller after pooling. You don’t lose too much semantic information, since you’re taking the maximum activation. There’s also a theory that max-pooling contributes a bit to giving CNNs more translation invariance. Check out this great video from Andrew Ng on the benefits of max-pooling.
Why do segmentation CNNs typically have an encoder-decoder style/structure?
The encoder CNN can basically be thought of as a feature extraction network, while the decoder uses that information to predict the image segments by “decoding” the features and upscaling to the original image size.
What is the significance of Residual Networks?
The main thing that residual connections did was allow for direct feature access from previous layers. This makes information propagation throughout the network much easier. One very interesting paper about this shows how using local skip connections gives the network a type of ensemble multi-path structure, giving features multiple paths to propagate throughout the network.
What is Batch Normalization and how does it work?
Training deep neural networks is complicated by the fact that the distribution of each layer’s inputs changes during training as the parameters of the previous layers change. The idea is to normalize the inputs of each layer so that they have a mean activation of zero and a standard deviation of one. This is done for each individual mini-batch at each layer, i.e. we compute the mean and variance of that mini-batch alone and then normalize. This is analogous to how the inputs to networks are standardized. How does this help? We know that normalizing the inputs to a network helps it learn. But a network is just a series of layers, where the output of one layer becomes the input to the next. That means we can think of any layer in a neural network as the first layer of a smaller subsequent network. Thought of as a series of neural networks feeding into each other, we normalize the output of one layer before applying the activation function and then feed it into the following layer (sub-network).
Why would you use many small convolutional kernels such as 3×3 rather than a few large ones?
This is very well explained in the VGGNet paper.
There are two reasons. First, you can use several smaller kernels rather than a few large ones to get the same receptive field and capture more spatial context, but with the smaller kernels you are using fewer parameters and computations. Second, because stacking smaller kernels means using more layers, you get more activation functions and thus a more discriminative mapping function being learned by your CNN.
What is Precision?
Precision (also called positive predictive value) is the fraction of relevant instances among the retrieved instances. Precision = true positives / (true positives + false positives)
What is Recall?
Recall (also known as sensitivity) is the fraction of relevant instances that have been retrieved over the total amount of relevant instances. Recall = true positive / (true positive + false negative)
Define F1-score.
It is the harmonic mean of precision and recall: F1 = 2 × (precision × recall) / (precision + recall). It takes both false positives and false negatives into account and is used to measure a model’s overall performance.
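A quick sketch of computing these metrics with scikit-learn on made-up predictions:

```python
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))   # TP / (TP + FP)
print("recall   :", recall_score(y_true, y_pred))      # TP / (TP + FN)
print("f1       :", f1_score(y_true, y_pred))          # harmonic mean of precision and recall
```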
What is cost function?
The cost function is a scalar function that quantifies the error of the neural network: the lower the cost, the better the network performs. For example, on the MNIST dataset, if the input image is the digit 2 and the neural network wrongly predicts it to be 3, the cost function measures that prediction error.
List different activation neurons or functions
● Linear Neuron ● Binary Threshold Neuron ● Stochastic Binary Neuron ● Sigmoid Neuron ● Tanh function ● Rectified Linear Unit (ReLU)
Define Learning rate
The learning rate is a hyper-parameter that controls how much we are adjusting the weights of our network with respect to the loss gradient.
What is Momentum (w.r.t NN optimization)?
Momentum lets the optimization algorithm remember its last step and adds some proportion of it to the current step. This way, even if the algorithm is stuck in a flat region or a small local minimum, it can get out and continue towards the true minimum.
What is the difference between Batch Gradient Descent and Stochastic Gradient Descent?
Batch gradient descent computes the gradient using the whole dataset. This is great for convex or relatively smooth error manifolds: we move somewhat directly towards an optimum solution, either local or global, and, given an annealed learning rate, batch gradient descent will eventually find the minimum located in its basin of attraction. Stochastic gradient descent (SGD) computes the gradient using a single sample. SGD works better (not perfectly, but better than batch gradient descent) for error manifolds that have lots of local maxima/minima. In this case, the somewhat noisier gradient calculated from a reduced number of samples tends to jerk the model out of local minima and into a region that is hopefully more optimal.
Epoch vs Batch vs Iteration.
Epoch: one forward pass and one backward pass over all of the training examples. Batch: the set of examples processed together in one forward/backward pass. Iteration: one pass over a single batch; the number of iterations per epoch equals the number of training examples divided by the batch size. For example, with 1,000 training examples and a batch size of 100, one epoch consists of 10 iterations.
What is the vanishing gradient?
As we add more and more hidden layers, backpropagation becomes less and less useful in passing information to the lower layers. In effect, as information is passed back, the gradients begin to vanish and become small relative to the weights of the networks.
What are dropouts?
Dropout is a simple way to prevent a neural network from overfitting. It is the dropping out of some of the units in a neural network during training. It is similar to the natural reproduction process, where nature produces offspring by combining distinct genes (dropping out others) rather than strengthening their co-adaptation.
What is data augmentation? Can you give some examples?
Data augmentation is a technique for synthesizing new data by modifying existing data in such a way that the target is not changed, or it is changed in a known way. Computer vision is one of the fields where data augmentation is very useful. There are many modifications that we can do to images: ● Resize ● Horizontal or vertical flip ● Rotate, Add noise, Deform ● Modify colors Each problem needs a customized data augmentation pipeline. For example, on OCR, doing flips will change the text and won’t be beneficial; however, resizes and small rotations may help.
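A rough sketch of such a pipeline with torchvision transforms (the file name and parameter values are hypothetical):

```python
from PIL import Image
from torchvision import transforms

# Typical augmentations for image classification (flips would hurt an OCR task, as noted above)
augment = transforms.Compose([
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.RandomRotation(degrees=10),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

image = Image.open("example.jpg")   # hypothetical input image
augmented = augment(image)          # a new, randomly perturbed training sample
print(augmented.shape)              # torch.Size([3, 224, 224])
```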
What are the components of GAN?
● Generator ● Discriminator
What’s the difference between a generative and discriminative model?
A generative model learns the distribution of each category of data, while a discriminative model simply learns the boundary that distinguishes different categories of data. Discriminative models will generally outperform generative models on classification tasks.
What is Linear Filtering?
Linear filtering is a neighborhood operation, which means that the output of a pixel’s value is decided by the weighted sum of the values of the input pixels.
How can you achieve Blurring through Gaussian Filter?
This is the most common technique for blurring or smoothing an image. The filter weights the pixel at the center of the neighborhood most heavily and gradually reduces the contribution of pixels farther from the center. This filter can also help in removing noise from an image.
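A one-line sketch with OpenCV (the file names are hypothetical):

```python
import cv2

image = cv2.imread("noisy_image.png")            # hypothetical input file
blurred = cv2.GaussianBlur(image, (5, 5), 0)     # 5x5 Gaussian kernel; sigma derived from the kernel size
cv2.imwrite("blurred.png", blurred)
```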
What is Non-Linear Filtering? How it is used?
Linear filtering is easy to use and implement. In some cases, this method is enough to get the necessary output. However, an increase in performance can be obtained through non-linear filtering. Through non-linear filtering, we can have more control and achieve better results when we encounter a more complex computer vision task.
Explain Median Filtering.
The median filter is an example of a non-linear filtering technique. It is commonly used for minimizing noise in an image: it inspects the image pixel by pixel and replaces each pixel’s value with the median of the values of its neighboring pixels.
What are some techniques for detecting and matching features?
● Lucas-Kanade ● Harris ● Shi-Tomasi ● SUSAN (smallest univalue segment assimilating nucleus) ● MSER (maximally stable extremal regions) ● SIFT (scale-invariant feature transform) ● HOG (histogram of oriented gradients) ● FAST (features from accelerated segment test) ● SURF (speeded-up robust features)
Describe the Scale Invariant Feature Transform (SIFT) algorithm
SIFT solves the problem of detecting the corners of an object even if it is scaled. Steps to implement this algorithm: ● Scale-space extrema detection – This step will identify the locations and scales that can still be recognized from different angles or views of the same object in an image. ● Keypoint localization – When possible key points are located, they would be refined to get accurate results. This would result in the elimination of points that are low in contrast or points that have edges that are deficiently localized. ● Orientation assignment – In this step, a consistent orientation is assigned to each key point to attain invariance when the image is being rotated. ● Keypoint matching – In this step, the key points between images are now linked to recognizing their nearest neighbors.
Why Speeded-Up Robust Features (SURF) came into existence?
SURF was introduced as a speeded-up version of SIFT. Though SIFT can detect and describe the key points of an object in an image, the algorithm is slow.
What is Oriented FAST and rotated BRIEF (ORB)?
This algorithm is a great possible substitute for SIFT and SURF, mainly because it performs better in computation and matching. It is a combination of the FAST keypoint detector and the BRIEF descriptor, with many alterations to improve performance. It is also a great alternative in terms of cost, because the SIFT and SURF algorithms are patented, which can require licensing fees to use them.
What is image segmentation?
In computer vision, segmentation is the process of extracting pixels in an image that is related. Segmentation algorithms usually take an image and produce a group of contours (the boundary of an object that has well-defined edges in an image) or a mask where a set of related pixels are assigned to a unique color value to identify it. Popular image segmentation techniques: ● Active contours ● Level sets ● Graph-based merging ● Mean Shift ● Texture and intervening contour-based normalized cuts
What is the purpose of semantic segmentation?
The purpose of semantic segmentation is to categorize every pixel of an image to a certain class or label. In semantic segmentation, we can see what is the class of a pixel by simply looking directly at the color, but one downside of this is that we cannot identify if two colored masks belong to a certain object.
Explain instance segmentation.
In semantic segmentation, the only thing that matters to us is the class of each pixel. This would somehow lead to a problem that we cannot identify if that class belongs to the same object or not. Semantic segmentation cannot identify if two objects in an image are separate entities. So to solve this problem, instance segmentation was created. This segmentation can identify two different objects of the same class. For example, if an image has two sheep in it, the sheep will be detected and masked with different colors to differentiate what instance of a class they belong to.
How is panoptic segmentation different from semantic/instance segmentation?
Panoptic segmentation is basically a union of semantic and instance segmentation. In panoptic segmentation, every pixel is classified by a certain class and those pixels that have several instances of a class are also determined. For example, if an image has two cars, these cars will be masked with different colors. These colors represent the same class — car — but point to different instances of a certain class.
Explain the problem of recognition in computer vision.
Recognition is one of the toughest challenges in the concepts in computer vision. Why is recognition hard? For the human eyes, recognizing an object’s features or attributes would be very easy. Humans can recognize multiple objects with very small effort. However, this does not apply to a machine. It would be very hard for a machine to recognize or detect an object because these objects vary. They vary in terms of viewpoints, sizes, or scales. Though these things are still challenges faced by most computer vision systems, they are still making advancements or approaches for solving these daunting tasks.
What is Object Recognition?
Object recognition is used for indicating an object in an image or video. This is a product of machine learning and deep learning algorithms. Object recognition tries to acquire this innate human ability, which is to understand certain features or visual detail of an image.
What is Object Detection and what are its real-life use cases?
Object detection in computer vision refers to the ability of machines to pinpoint the location of an object in an image or video. A lot of companies have been using object detection techniques in their system. They use it for face detection, web images, and security purposes.
Describe Optical Flow, its uses, and assumptions.
Optical flow is the pattern of apparent motion of image objects between two consecutive frames, caused by the movement of the object or the camera. It is a 2D vector field where each vector is a displacement vector showing the movement of points from the first frame to the second. Optical flow has many applications in areas like: ● Structure from Motion ● Video Compression ● Video Stabilization Optical flow works on several assumptions: 1. The pixel intensities of an object do not change between consecutive frames. 2. Neighboring pixels have similar motion.
What is HOG?
HOG stands for Histogram of Oriented Gradients. HOG is a type of “feature descriptor”. The intent of a feature descriptor is to generalize the object in such a way that the same object (in this case a person) produces as close as possible to the same feature descriptor when viewed under different conditions. This makes the classification task easier.
What’s the difference between valid and same padding in a CNN?
This question has more chances of being a follow-up question to the previous one. Or if you have explained how you used CNNs in a computer vision task, the interviewer might ask this question along with the details of the padding parameters. ● Valid Padding: When we do not use any padding. The resultant matrix after convolution will have dimensions (n – f + 1) X (n – f + 1) ● Same padding: Adding padded elements all around the edges such that the output matrix will have the same dimensions as that of the input matrix
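A short PyTorch sketch showing the resulting output shapes for a 100×100 input and a 3×3 kernel:

```python
import torch
import torch.nn as nn

x = torch.randn(1, 1, 100, 100)                    # one 100x100 single-channel image

valid = nn.Conv2d(1, 1, kernel_size=3, padding=0)  # "valid": no padding
same = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # "same": pad so the output matches the input size

print(valid(x).shape)   # torch.Size([1, 1, 98, 98])  -> (n - f + 1) = 100 - 3 + 1
print(same(x).shape)    # torch.Size([1, 1, 100, 100])
```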
What is BOV: Bag-of-visual-words (BOV)?
BOV also called the bag of key points, is based on vector quantization. Similar to HOG features, BOV features are histograms that count the number of occurrences of certain patterns within a patch of the image.
What is Poselets? Where are poselets used?
Poselets rely on manually added extra keypoints such as “right shoulder”, “left shoulder”, “right knee” and “left knee”. They were originally used for human pose estimation.
Explain Textons in context of CNNs
A texton is the minimal building block of vision. The computer vision literature does not give a strict definition for textons, but edge detectors could be one example. One might argue that deep learning techniques with Convolution Neuronal Networks (CNNs) learn textons in the first filters.
What is Markov Random Fields (MRFs)?
MRFs are undirected probabilistic graphical models, a widespread model in computer vision. The overall idea of MRFs is to assign a random variable to each feature and a random variable to each pixel.
Explain the concept of superpixel?
A superpixel is an image patch that is better aligned with intensity edges than a rectangular patch. Superpixels can be extracted with any segmentation algorithm, however, most of them produce highly irregular superpixels, with widely varying sizes and shapes. A more regular space tessellation may be desired.
What is Non-maximum suppression(NMS) and where is it used?
NMS is often used along with edge detection algorithms. The image is scanned along the image gradient direction, and if pixels are not part of the local maxima they are set to zero. It is widely used in object detection algorithms.
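A rough NumPy sketch of NMS for bounding boxes (the boxes, scores, and IoU threshold are made up), showing how overlapping lower-scoring boxes get suppressed:

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one box against an array of boxes, format (x1, y1, x2, y2)."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
    return inter / (area(box) + area(boxes) - inter)

def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring boxes and suppress overlapping lower-scoring ones."""
    order = np.argsort(scores)[::-1]   # indices sorted by descending score
    keep = []
    while order.size > 0:
        best = order[0]
        keep.append(int(best))
        rest = order[1:]
        order = rest[iou(boxes[best], boxes[rest]) < iou_threshold]
    return keep

boxes = np.array([[10, 10, 50, 50], [12, 12, 52, 52], [100, 100, 150, 150]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))   # -> [0, 2]: the second box overlaps the first and is suppressed
```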
Describe the use of Computer Vision in Healthcare.
Computer vision has also been an important part of advances in health-tech. Computer vision algorithms can help automate tasks such as detecting cancerous moles in skin images or finding symptoms in x-ray and MRI scans
Describe the use of Computer Vision in Augmented Reality & Mixed Reality
Computer vision also plays an important role in augmented and mixed reality, the technology that enables computing devices such as smartphones, tablets, and smart glasses to overlay and embed virtual objects on real-world imagery. Using computer vision, AR gear detects objects in the real world in order to determine the locations on a device’s display to place a virtual object. For instance, computer vision algorithms can help AR applications detect planes such as tabletops, walls, and floors, a very important part of establishing depth and dimensions and placing virtual objects in the physical world.
Describe the use of Computer Vision in Facial Recognition
Computer vision also plays an important role in facial recognition applications, the technology that enables computers to match images of people’s faces to their identities. Computer vision algorithms detect facial features in images and compare them with databases of face profiles. Consumer devices use facial recognition to authenticate the identities of their owners. Social media apps use facial recognition to detect and tag users. Law enforcement agencies also rely on facial recognition technology to identify criminals in video feeds.
Describe the use of Computer Vision in Self-Driving Cars
Computer vision enables self-driving cars to make sense of their surroundings. Cameras capture video from different angles around the car and feed it to computer vision software, which then processes the images in real-time to find the extremities of roads, read traffic signs, detect other cars, objects, and pedestrians. The self-driving car can then steer its way on streets and highways, avoid hitting obstacles, and (hopefully) safely drive its passengers to their destination.
Explain famous Computer Vision tasks using a single image example.
Many popular computer vision applications involve trying to recognize things in photographs, for example: ● Object classification: what broad category of object is in this photograph? ● Object identification: which type of a given object is in this photograph? ● Object verification: is the object in the photograph? ● Object detection: where are the objects in the photograph? ● Object landmark detection: what are the key points for the object in the photograph? ● Object segmentation: what pixels belong to the object in the image? ● Object recognition: what objects are in this photograph and where are they?
Explain the distinction between Computer Vision and Image Processing.
Computer vision is distinct from image processing. Image processing is the process of creating a new image from an existing image, typically simplifying or enhancing the content in some way. It is a type of digital signal processing and is not concerned with understanding the content of an image. A given computer vision system may require image processing to be applied to raw input, e.g. pre-processing images. Examples of image processing include: ● Normalizing photometric properties of the image, such as brightness or color. ● Cropping the bounds of the image, such as centering an object in a photograph. ● Removing digital noise from an image, such as digital artifacts from low light levels
Explain business use cases in computer vision.
● Optical character recognition (OCR) ● Machine inspection ● Retail (e.g. automated checkouts) ● 3D model building (photogrammetry) ● Medical imaging ● Automotive safety ● Match move (e.g. merging CGI with live actors in movies) ● Motion capture (mocap) ● Surveillance ● Fingerprint recognition and biometrics
What is a Boltzmann Machine?
One of the most basic Deep Learning models is a Boltzmann Machine, resembling a simplified version of the Multi-Layer Perceptron. This model features a visible input layer and a hidden layer — just a two-layer neural net that makes stochastic decisions as to whether a neuron should be on or off. Nodes are connected across layers, but no two nodes of the same layer are connected.
What Is the Role of Activation Functions in a Neural Network?
At the most basic level, an activation function decides whether a neuron should be fired or not. It accepts the weighted sum of the inputs and bias as input to any activation function. Step function, Sigmoid, ReLU, Tanh, and Softmax are examples of activation functions.
What Is the Difference Between a Feedforward Neural Network and Recurrent Neural Network?
In a Feedforward Neural Network, signals travel in one direction, from input to output. There are no feedback loops; the network considers only the current input and cannot memorize previous inputs (e.g., a CNN). In a Recurrent Neural Network, signals travel through loops, so the network retains information about previous inputs, which makes it suitable for sequential data.
What Are the Applications of a Recurrent Neural Network (RNN)?
The RNN can be used for sentiment analysis, text mining, and image captioning. Recurrent Neural Networks can also address time series problems such as predicting the prices of stocks in a month or quarter.
What Are the Softmax and ReLU Functions?
Softmax is an activation function that generates outputs between zero and one and divides each output so that the total sum of the outputs equals one; it is often used for output layers. ReLU (Rectified Linear Unit) outputs the input directly when it is positive and zero otherwise; it is the most widely used activation function for hidden layers.
What is overfitting, and how can you avoid it?
Overfitting is a situation that occurs when a model learns the training set too well, taking up random fluctuations in the training data as concepts. These impact the model’s ability to generalize and don’t apply to new data. When such a model is given the training data, it shows nearly 100 percent accuracy (technically, a slight loss). But when we use the test data, there may be errors and low efficiency. This condition is known as overfitting. There are multiple ways of avoiding overfitting, such as: ● Regularization, which adds a cost term for the features to the objective function ● Making a simpler model: with fewer variables and parameters, the variance can be reduced ● Cross-validation methods like k-fold can also be used ● If some model parameters are likely to cause overfitting, regularization techniques like LASSO can be used to penalize these parameters
What is meant by ‘Training set’ and ‘Test Set’?
We split the given data set into two different sections namely, ‘Training set’ and ‘Test Set’. ‘Training set’ is the portion of the dataset used to train the model. ‘Testing set’ is the portion of the dataset used to test the trained model.
How Do You Handle Missing or Corrupted Data in a Dataset?
One of the easiest ways to handle missing or corrupted data is to drop those rows or columns or to replace them entirely with some other value. There are useful methods in Pandas: ● isnull() and dropna() will help find the columns/rows with missing data and drop them ● fillna() will replace the missing values with a placeholder value
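A quick sketch with pandas (the column names and placeholder values are made up):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [25, np.nan, 31], "city": ["Paris", "Berlin", None]})

print(df.isnull().sum())    # count missing values per column

dropped = df.dropna()       # drop every row that contains a missing value
filled = df.fillna({"age": df["age"].median(), "city": "unknown"})   # impute placeholder values

print(dropped)
print(filled)
```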
How Do You Design an Email Spam Filter?
Building a spam filter involves the following process:
● The email spam filter will be fed with thousands of emails ● Each of these emails already has a label: ‘spam’ or ‘not spam.’ ● The supervised machine learning algorithm will then determine which type of emails are being marked as spam based on spam words like the lottery, free offer, no money, full refund, etc. ● The next time an email is about to hit your inbox, the spam filter will use statistical analysis and algorithms like Decision Trees and SVM to determine how likely the email is spam ● If the likelihood is high, it will label it as spam, and the email won’t hit your inbox ● Based on the accuracy of each model, we will use the algorithm with the highest accuracy after testing all the models
Explain bagging.
Bagging, or Bootstrap Aggregating, is an ensemble method in which the dataset is first divided into multiple subsets through resampling. Then, each subset is used to train a model, and the final predictions are made through voting or averaging the component models. Bagging is performed in parallel.
What is the ROC Curve and what is AUC (a.k.a. AUROC)?
The ROC (receiver operating characteristic) curve is a performance plot for binary classifiers showing the True Positive Rate (y-axis) against the False Positive Rate (x-axis). AUC is the area under the ROC curve, and it’s a common performance metric for evaluating binary classification models. It’s equivalent to the probability that a uniformly drawn random positive is ranked before a uniformly drawn random negative.
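A minimal sketch with scikit-learn on made-up scores:

```python
from sklearn.metrics import roc_auc_score, roc_curve

y_true = [0, 0, 1, 1, 0, 1]
y_scores = [0.1, 0.4, 0.35, 0.8, 0.2, 0.7]   # predicted probabilities for the positive class

fpr, tpr, thresholds = roc_curve(y_true, y_scores)
print("AUC:", roc_auc_score(y_true, y_scores))
# Plotting fpr against tpr (e.g. with matplotlib) gives the ROC curve itself
```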
What is cross-validation?
Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter called k that refers to the number of groups that a given data sample is to be split into; as such, the procedure is often called k-fold cross-validation. When a specific value for k is chosen, it may be used in place of k in the reference to the model, such as k=10 becoming 10-fold cross-validation. It is mainly used where the objective is prediction and one wants to estimate how accurately a model will perform in practice.
Cross-validation is primarily used in applied machine learning to estimate the skill of a machine learning model on unseen data. That is, to use a limited sample in order to estimate how the model is expected to perform in general when used to make predictions on data not used during the training of the model.
It is a popular method because it is simple to understand and because it generally results in a less biased or less optimistic estimate of the model skill than other methods, such as a simple train/test split.
The general procedure is as follows: 1. Shuffle the dataset randomly. 2. Split the dataset into k groups 3. For each unique group: a. Take the group as a hold out or test data set b. Take the remaining groups as a training data set c. Fit a model on the training set and evaluate it on the test set d. Retain the evaluation score and discard the model 4. Summarize the skill of the model using the sample of model evaluation scores
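A minimal sketch of this procedure with scikit-learn, using the built-in iris dataset as a stand-in:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: each fold is used once as the held-out test set
cv = KFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

print(scores)           # one accuracy score per fold
print(scores.mean())    # summary of the model's skill
```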
What are 3 data preprocessing techniques to handle outliers?
1. Winsorize (cap at threshold). 2. Transform to reduce skew (using Box-Cox or similar). 3. Remove outliers if you’re certain they are anomalies or measurement errors.
How much data should you allocate for your training, validation, and test sets?
You have to find a balance, and there’s no right answer for every problem. If your test set is too small, you’ll have an unreliable estimation of model performance (performance statistic will have high variance). If your training set is too small, your actual model parameters will have a high variance. A good rule of thumb is to use an 80/20 train/test split. Then, your train set can be further split into train/validation or into partitions for cross-validation.
What Is a False Positive and False Negative and How Are They Significant?
False positives are those cases which wrongly get classified as True but are False. False negatives are those cases which wrongly get classified as False but are True. In the term ‘False Positive’, the word ‘Positive’ refers to the ‘Yes’ row of the predicted value in the confusion matrix. The complete term indicates that the system has predicted it as a positive, but the actual value is negative.
What’s a Fourier transform?
A Fourier transform is a generic method to decompose generic functions into a superposition of symmetric functions. Or as this more intuitive tutorial puts it, given a smoothie, it’s how we find the recipe. The Fourier transform finds the set of cycle speeds, amplitudes, and phases to match any time signal. A Fourier transform converts a signal from time to frequency domain — it’s a very common way to extract features from audio signals or other time series such as sensor data.
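A small NumPy sketch: transforming a sampled sine wave into the frequency domain and reading off its dominant frequency:

```python
import numpy as np

fs = 100                                 # sampling rate in Hz
t = np.arange(0, 1, 1 / fs)              # one second of samples
signal = np.sin(2 * np.pi * 5 * t)       # a 5 Hz sine wave

spectrum = np.fft.rfft(signal)                   # time domain -> frequency domain
freqs = np.fft.rfftfreq(len(signal), d=1 / fs)   # frequency of each FFT bin

print(freqs[np.argmax(np.abs(spectrum))])   # ~5.0 Hz, the frequency of the original sine
```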
What are the most promising areas of machine learning research right now?
Machine learning is just one component of a larger field called artificial intelligence (AI). AI researchers have done an excellent job at describing the fundamental problems they must solve to achieve intelligent behavior; these problems fall into four general categories: representation, reasoning, learning, and search.
Basically, all of AI research can be classified under these headings; for example, language understanding is a special case of representation (natural language), planning is a special case of reasoning (analogical logical inferences), learning to play chess is a special case of learning (policy search in the game tree), and table lookup is a special case of search (symbol-table lookups). We will focus on two: representation and search.
What follows are our ten favorite problems/areas for the next decade or so. Each one has been researched quite heavily already, but we think that there are no silver bullets yet discovered nor are there any obvious candidates lurking in the wings waiting to take over. Each area has a different flavor to it; all have something to offer the machine learning community, and we believe that many will find fertile ground for their own investigations.
Machine learning methods are useful on large problems, which is becoming increasingly important as applications such as speech recognition are moving into real-world situations outside the lab (e.g., using voice commands while driving). Solution: This is a difficult one because there are many possible solutions to this problem; all will require advances in both theoretical and experimental techniques but we do not know what they are yet. A better understanding of why certain learning algorithms work well on some types of problems but not others may provide insights into how to scale them up. Some examples of the types of problems we would like to tackle include: (i) learning from large databases, (ii) learning in multiple domains, and (iii) learning task-specific knowledge.
Artificial intelligence methods have been used to solve combinatorial problems such as chess playing and problem-solving; these are problems that can be represented as a search tree using nodes representing possible moves for each player. These methods work well on small problems but often fail when applied to larger real-world problems because there are too many options in the search trees that must be explored. For example, consider a game where there are 100 moves per second for each player with 10^100 different games possible over a 40 year lifetime. Solving the AI problem amounts to finding a winning strategy. This is much different from the type of problems we are used to solving which normally fit in memory and where the number of potential options can be kept manageable. Solution: We need better methods than those currently available for searching through very large trees; these could involve ideas from machine learning, such as neural networks or evolutionary algorithms.
Searching for solutions to a problem among all possible alternatives is an important capability but one that has not been researched nearly enough due to its complexity. A brute-force search would seem to require enumerating all alternatives, which is impossible even on extremely simple problems, whereas other approaches seem so specialized that they have little value outside their specific domain (and sometimes not even there). In contrast, machine learning methods can be applied to virtually any problem where the solution space is finite (e.g., finding a path through a graph or board games like chess).
The brute-force approach of enumerating all possible combinations works only on optimization problems small enough that the few desirable solutions can be found by exhaustive search, but many applications require solving very large problems with thousands or millions of potential solutions. Examples include the Traveling Salesman Problem and scheduling tasks for an airline crew using dozens of variables (e.g., number of passengers, weight, and the distance between origin and destination cities), a task made harder still by occasional breakdowns in equipment. Any feasible algorithm will require shortcuts, which often involve approximations or heuristics.
What is the main purpose of using PCA on a dataset, and what are some examples of its application?
PCA is short for Principal Component Analysis, and it’s a technique used to reduce the dimensionality of a dataset. In other words, it helps you to find the important variables in a dataset and get rid of the noise. PCA is used in a variety of fields, from image recognition to facial recognition to machine learning.
PCA has a few main applications:
– Reducing the number of features in a dataset
– Finding relationships between features
– Identifying clusters in data
– Visualizing data
Let’s take a look at an example. Say you have a dataset with 1000 features (variables). PCA can help you reduce that down to, say, 10 features that explain the majority of variance in the data. This is helpful because it means you can build a model with far fewer features, which makes it simpler and faster. In addition, PCA can help you to find relationships between features and identify clusters in data. All of this can be extremely helpful in understanding and using your data.
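As a rough illustration of that example (a sketch only, assuming scikit-learn and a synthetic matrix rather than any particular dataset), reducing 1000 features to the 10 strongest directions of variance might look like this:

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

X = np.random.rand(500, 1000)                   # 500 samples, 1000 features (synthetic stand-in)
X_scaled = StandardScaler().fit_transform(X)    # PCA is sensitive to feature scale

pca = PCA(n_components=10)                      # keep the 10 components with the most variance
X_reduced = pca.fit_transform(X_scaled)

print(X_reduced.shape)                          # (500, 10)
print(pca.explained_variance_ratio_.sum())      # fraction of the total variance retained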
PCA is an important tool in Machine Learning, and has a number of applications. The main purpose of PCA is to reduce the dimensionality of a dataset, while still retaining as much information as possible. This can be useful when dealing with very large datasets, as it can make training and testing faster and more efficient. PCA is also often used for data visualization, as it can help to create clear and concise visualizations of high-dimensional data. Finally, PCA can be used for feature selection, as it can help to identify the most important features in a dataset. PCA is a powerful tool that can be applied in many different ways, and is an essential part of any Machine Learning workflow.
What are subservient sounding male names suitable for an automated assistant?
Artificial intelligence is increasingly becoming a staple in our lives, with everything from our homes to our workplaces being automated to some degree. And as AI becomes more ubiquitous, we are starting to see a trend of subservient-sounding names being given to male automated assistants. This is likely due to a combination of factors, including the fact that women are still primarily seen as domestic servants and the fact that many people find it easier to relate to a male voice. Whatever the reason, it seems that subservient-sounding names are here to stay when it comes to male AI. So if you’re looking for a name for your new automated assistant, here are some subservient-sounding male names to choose from:
– Jasper: A popular name meaning “treasurer” or “bringer of riches.”
– Custer: A name derived from the Latin word for “servant.”
– Luther: A Germanic name meaning “army of warriors.”
– Benson: A name of English origin meaning “son of Ben.”
– Wilfred: A name of Germanic origin meaning “desires peace.”
In recent years, there has been an increasing trend of using subservient sounding male names for automated assistants. Artificial intelligence is becoming more prevalent in our everyday lives, and automation is slowly but surely taking over many routine tasks. As such, it’s no surprise that we’re seeing a name trend emerge that reflects our growing dependence on these technologies. So what are some suitable names for an automated assistant? How about “Robo-Bob”? Or “Mecha-Mike”? Perhaps even “Cyber-Steve”? Whatever you choose, just be sure to pick a name that sounds suitably subservient! After all, your automated assistant should reflect your growing dependency on technology… and not your growing dominance over it!
How do you calculate user churn rate?
Churn rate is a metric that measures the percentage of users who leave or discontinue using a service within a given time period. The churn rate is an important metric for businesses to track because it can help them identify areas where their product or service is losing users. There are many ways to calculate the churn rate, but one of the most popular methods is to use machine learning or artificial intelligence. Artificial intelligence can help identify patterns in user behavior that may indicate that someone is about to leave the service. By tracking these patterns, businesses can be proactive in addressing user needs and reducing the chances of losing them. In addition, automation can also help reduce the churn rate by making it easier for users to stay with the service. Automation can handle tasks like customer support and billing, freeing up users’ time and making it less likely that they will discontinue their subscription. By using machine learning and artificial intelligence, businesses can more accurately predict and prevent user churn.
There are a few different ways to calculate the user churn rate using artificial intelligence. One way is to use a technique called Artificial Neural Networks. This involves training a computer to recognize patterns in data. Once the computer has learned to recognize these patterns, it can then make predictions about future data. Another way to calculate the user churn rate is to use a technique called Support Vector Machines. This approach uses algorithms to find the boundaries between different groups of data. Once these boundaries have been found, the algorithm can then make predictions about new data points. Finally, there is a technique called Bayesian inference. This approach uses probability theory to make predictions about future events. By using these three techniques, it is possible to calculate the user churn rate with a high degree of accuracy.
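Whatever modeling is layered on top, the underlying arithmetic is simple. A minimal sketch (assuming you track the active users at the start of a period and how many of them leave during it):

def churn_rate(users_at_start: int, users_lost_in_period: int) -> float:
    """Fraction of the starting user base that left during the period."""
    return users_lost_in_period / users_at_start

# e.g. 1,000 subscribers at the start of the month, 50 of them cancelled during it
print(churn_rate(1000, 50))   # 0.05, i.e. a 5% monthly churn rate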
As a machine learning researcher, what are the recent trends in the field that you don’t like?
Folks with no educational background taking a MOOC or two in deep learning, entering the field, and skipping over basic concepts in machine learning–specificity/sensitivity, the difference between supervised and unsupervised learning, linear regression, ensembles, proper design of a study/test, probability distributions… With enough MOOCs, you can sound like you know what you are doing, but as soon as something goes wrong or changes slightly, there’s no knowledge about how to fix it. Big problem in employment, particularly when hiring a first machine learning engineer/data scientist. Source: Colleen Farrelly
What is the future of deep learning for medical image segmentation?
With the rapid development of artificial intelligence (AI) technology, using AI to mine clinical data has become a major trend in the medical industry. Utilizing advanced AI algorithms for medical image analysis, one of the critical parts of clinical diagnosis and decision-making, has become an active research area in both industry and academia. Recent applications of deep learning in medical image analysis involve various computer-vision tasks such as classification, detection, segmentation, and registration. Among them, classification, detection, and segmentation are the fundamental and most widely used tasks and can be handled with a platform such as Scale, while the more demanding methods require a more sophisticated platform such as Tasq.
Although a number of reviews of deep learning methods for medical image analysis exist, most of them emphasize either general deep learning techniques or specific clinical applications. The most comprehensive review paper is the work of Litjens et al., published in 2017. Deep learning is such a quickly evolving research field that numerous state-of-the-art works have been proposed since then.
AI Technologies in Medical Image Analysis
Different medical imaging modalities have their own unique characteristics and different responses to human body structure and organ tissue, and can be used for different clinical purposes. The commonly used image modalities for diagnostic analysis in the clinic include projection imaging (such as X-ray imaging), computed tomography (CT), ultrasound imaging, and magnetic resonance imaging (MRI). MRI sequences include T1, T1-w, T2, T2-w, diffusion-weighted imaging (DWI), apparent diffusion coefficient (ADC), and fluid attenuation inversion recovery (FLAIR). Figure 1 demonstrates a few examples of medical image modalities and their corresponding clinical applications.
Image Classification for Medical Image Analysis
As a fundamental task in computer vision, image classification plays an essential role in computer-aided diagnosis. A straightforward use of image classification for medical image analysis is to classify an input image or a series of images as either containing one (or a few) of predefined diseases or free of diseases (i.e., healthy case). Typical clinical applications of image classification tasks include skin disease identification in dermatology, eye disease recognition in ophthalmology (such as diabetic retinopathy, glaucoma, and corneal diseases). Classification of pathological images for various cancers such as breast cancer and brain cancer also belongs to this area.
The convolutional neural network (CNN) is the dominant classification framework for image analysis. With the development of deep learning, the CNN framework has continuously improved. AlexNet was a pioneering convolutional neural network, composed of repeated convolutions, each followed by ReLU and a max-pooling operation with stride for downsampling. VGGNet used small 3×3 convolution kernels and 2×2 maximum pooling to simplify the structure of AlexNet, and showed improved performance by simply increasing the number of layers and the depth of the network. By combining and stacking 1×1, 3×3, and 5×5 convolution kernels and pooling, the Inception network and its variants increased the width and the adaptability of the network. ResNet and DenseNet both used skip connections to relieve gradient vanishing. SENet proposed a squeeze-and-excitation module which enabled the model to pay more attention to the most informative channel features. The EfficientNet family applied AutoML and a compound scaling method to uniformly scale the width, depth, and resolution of the network in a principled way, resulting in improved accuracy and efficiency. Source: Kelly Holland
Today, a lot of AI work uses GPUs. Why use a GPU if the CPU is the processor that is supposed to do the computation?
GPUs also process things. It’s just that they’re better and faster at “specific” things.
The main thing a GPU is “awesome” at, exactly because it is designed specifically for it: matrix maths. This is the sort of calculation used when converting a bunch of 3D points (XYZ values) into an approximation of how such a shape would look from a camera, i.e. rendering a 2D picture from a 3D object – exactly why a GPU is made in the first place: https://www.3dgep.com/3d-math-primer-for-game-programmers-matrices/
The sorts of calculations used in current “AI”? Guess what? Matrix maths.
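As a rough illustration (a sketch assuming PyTorch and a CUDA-capable GPU; the matrix sizes are arbitrary), the very same matrix multiplication can be dispatched to either device:

import torch

a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

c_cpu = a @ b                          # runs on the CPU

if torch.cuda.is_available():          # only if a CUDA GPU is present
    a_gpu, b_gpu = a.cuda(), b.cuda()
    c_gpu = a_gpu @ b_gpu              # same maths, massively parallelised on the GPU
    print(torch.allclose(c_cpu, c_gpu.cpu(), atol=1e-3))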
How many data points does a machine learning algorithm need to learn a classification rule?
Formally, the smallest number of data points needed for successfully learning a classification rule using a machine learning (ML) algorithm is called the sample complexity of the algorithm. Now, you might wonder why sample complexity is such a big deal. It’s because sample complexity is to ML algorithms what computational complexity is to any algorithm. It measures the minimum amount of resource (i.e., the data) that is required to achieve the desired goal.
There are several interesting answers to the question of sample complexity that arise from various assumptions on the learner. In what follows, I will give the answer under some popular assumptions/scenarios.
Scenario 1: Perfect Learning
In our first scenario, we consider the problem of learning the correct hypothesis (classification rule) amongst a set of plausible hypotheses. The data is sampled independently from an unknown probability distribution.
It turns out that under no further assumptions on the data-generating probability distribution, the problem is impossible. In other words, there is no algorithm that can learn the correct classification rule perfectly from any finite amount of data. This result is called the No Free Lunch Theorem in machine learning. I’ve discussed this result in more detail here.
Scenario 2: Probably Approximately Correct (PAC) Learning
For the second scenario, we consider the problem of learning the correct hypothesis approximately, with high probability. That is, our algorithm may fail to identify even an approximately correct hypothesis with some small probability. This relaxation allows us to give a slightly more useful answer to the question.
The answer to this question is of the order of the VC-dimension of the hypothesis class. More precisely, if we want the algorithm to be approximately correct with an error of at most ε with probability at least 1 − δ, then we need on the order of (d/ε)·log(1/(εδ)) samples, where d is the VC-dimension of the hypothesis class. Note that d can be infinite for certain hypothesis classes. In that case, it is not possible to succeed in the learning task even approximately, even with high probability. On the other hand, if d is finite, we say that the hypothesis class is (ε, δ)-PAC learnable. (I explain PAC-learnability in more detail in this answer.)
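As a back-of-the-envelope illustration (a sketch only: the constant factors of the real bound are ignored and the numbers are made up), plugging values into the (d/ε)·log(1/(εδ)) expression gives a feel for how the requirement grows:

import math

def pac_sample_bound(d: int, eps: float, delta: float) -> float:
    """Order-of-magnitude sample requirement (d/eps) * log(1/(eps*delta)); constants omitted."""
    return (d / eps) * math.log(1.0 / (eps * delta))

print(pac_sample_bound(d=10, eps=0.1, delta=0.05))    # ~530 samples
print(pac_sample_bound(d=10, eps=0.01, delta=0.05))   # ~7600 samples: 10x tighter error needs ~14x more data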
Scenario 3: Learning with a Teacher
In the previous two scenarios, we assume that the data that is presented to the learner is randomly sampled from an unknown probability distribution. For this scenario, we do away with the randomness. Instead, we assume that the learner is presented with a carefully chosen set of training data points that are picked by a benevolent teacher. (By benevolent teacher, I mean that the teacher tries to make the learner guess the correct hypothesis with the fewest number of data points.)
In this case, the answer to the question is the teaching dimension. It is interesting to note that there is no straightforward relation between the teaching dimension and VC-dimension of a hypothesis class. They can be arbitrarily far from each other. (If you’re curious to know the relation between the two, here is a nice paper.)
In addition to these, there are other notions of “dimension” that characterize the sample complexity of a learning task under different scenarios. For example, there is the Littlestone dimension for online learning and Natarajan dimension for multi-class learning. Intuitively, these dimensions capture the inherent hardness of a machine learning task. The harder the task, the higher the dimension and the corresponding sample complexity.
To those of you seeking exact numbers, here’s a note I added in the comments section: I wish I could add some useful empirical results, but the sample complexity bounds obtained by the PAC-learning approach are really loose, to the point of being useless for most state-of-the-art ML algorithms like deep learning. So, the results I presented are basically a theoretical curiosity at this point. However, this might change in the near future, as lots of researchers are working on strengthening this framework.
How can a machine learning algorithm learn from small datasets?
As mentioned in the other answer, this can be understood using the concept of bias-variance tradeoff.
For any machine learning model, you want to find a function that approximately fits your data. So, you essentially define the following:
Class of functions: Instead of searching in the space of all possible functions, you restrict the space of functions that the algorithm searches over. For example, a linear classifier will search among all possible lines, but will not consider more complex curves.
Loss function: This is used to compare two functions from the above class of functions. For instance, in SVM, you would prefer line 1 to line 2 if line 1 has a larger margin than line 2.
Now, the simpler your class of functions is, the smaller the amount of data required. To get some intuition for this, think about a regression problem that has three features. So, a linear function class will have the following form:
y = a0 + a1x1 + a2x2 + a3x3
Every point (p, q, r, s) in the 4-dimensional space corresponds to a function of the above form, namely y = p + qx1 + rx2 + sx3. So, you need to find one point in that 4D space that fits your data well.
Now, if instead of the class of linear functions you chose quadratic functions, your functions would be of the following form:
y = a0 + a1x1 + a2x2 + a3x3 + a4x1^2 + a5x2^2 + a6x3^2 + a7x1x2 + a8x1x3 + a9x2x3
So now, you have to search for the best point in a 10D space! Therefore, you need more data to distinguish this larger set of candidate functions from each other.
With that intuition, we can say that to learn from a small amount of data, you want to define a small enough function class.
Note: While in the above example we simply look at the number of parameters to get a sense of the complexity of the function class, in general more parameters do not necessarily mean more complexity [for instance, if a lot of the parameters are strongly correlated].
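To make the parameter-count jump concrete (a sketch assuming a recent scikit-learn; the feature matrix is synthetic), PolynomialFeatures shows the same 3 inputs expanding from 4 linear parameters to 10 quadratic ones:

import numpy as np
from sklearn.preprocessing import PolynomialFeatures

X = np.random.rand(5, 3)                        # 5 samples, 3 features x1, x2, x3

linear = PolynomialFeatures(degree=1).fit(X)    # bias + 3 linear terms
quad = PolynomialFeatures(degree=2).fit(X)      # bias + linear + squares + cross terms

print(linear.get_feature_names_out())   # 4 parameters -> a point in 4D space
print(quad.get_feature_names_out())     # 10 parameters -> a point in 10D space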
I like this book very much. When in doubt I look there, and usually find what I am looking for, or I find references on where to go to study the problem more in depth. I like that it tries to show how various topics are interrelated, and to give general architectures for general problems … It is a jump in quality with respect to the AI books that were previously available. — Prof. Giorgio Ingargiola (Temple).
Really excellent on the whole and it makes teaching AI a lot easier. — Prof. Ram Nevatia (USC).
It is an impressive book, which begins just the way I want to teach, with a discussion of agents, and ties all the topics together in a beautiful way. — Prof. George Bekey (USC).
“Written by three experts in the field, Deep Learning is the only comprehensive book on the subject.” —Elon Musk, cochair of OpenAI; cofounder and CEO of Tesla and SpaceX.
“If you want to know where deep learning came from, what it is good for, and where it is going, read this book.” —Geoffrey Hinton FRS, Professor, University of Toronto, Research Scientist at Google.
“An exceptional resource to study Machine Learning. You will find clear-minded, intuitive explanations, and a wealth of practical tips.” —François Chollet, Author of Keras, author of Deep Learning with Python.
“This book is a great introduction to the theory and practice of solving problems with neural networks; I recommend it to anyone interested in learning about practical ML.” — Peter Warden, Mobile Lead for TensorFlow.
When should you not normalize data in machine learning?
First things first, I don’t think there are many questions of the form “Is it a good practice to always X in machine learning” where the answer is going to be definitive. Always? Always always? Across parametric, non-parametric, Bayesian, Monte Carlo, social science, purely mathematic, and million feature models? That’d be nice, wouldn’t it! Anyway feel free to check out this interactive demo from deepchecks.
Concretely though, here are a few ways in which it just depends.
Some times when normalizing is good:
1) Several algorithms, in particular SVMs come to mind, can sometimes converge far faster on normalized data (although why, precisely, I can’t recall).
2) When your model is sensitive to magnitude, and the units of two different features are different, and arbitrary. This is like the case you suggest, in which something gets more influence than it should.
But of course — not all algorithms are sensitive to magnitude in the way you suggest. Plain linear regression, for instance, gives identical predictions whether or not you scale your data: the coefficients simply rescale to compensate, because the model is capturing proportional relationships between the features and the target.
Some times when normalizing is bad:
1) When you want to interpret your coefficients, and they don’t normalize well. Regression on something like dollars gives you a meaningful outcome. Regression on proportion-of-maximum-dollars-in-sample might not.
2) When, in fact, the units on your features are meaningful, and distance does make a difference! Back to SVMs — if you’re trying to find a max-margin classifier, then the units that go into that ‘max’ matter. Scaling features for clustering algorithms can substantially change the outcome. Imagine four clusters around the origin, each one in a different quadrant, all nicely scaled. Now, imagine the y-axis being stretched to ten times the length of the x-axis. Instead of four little quadrant-clusters, you’re going to get the long squashed baguette of data chopped into four pieces along its length! (And, the important part is, you might prefer either of these!)
In what I’m sure is an unsatisfying summary, the most general answer is that you need to ask yourself seriously what makes sense with the data, and model, you’re using. Source: ABC of Data Science and ML
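As a small illustration of the clustering point above (a sketch with made-up 2-D data, assuming scikit-learn), simply rescaling one axis can change which points KMeans groups together:

import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))                  # two "nicely scaled" features

labels_raw = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

X_stretched = X.copy()
X_stretched[:, 1] *= 10                        # stretch the y-axis tenfold
labels_stretched = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X_stretched)

# Crude check: cluster IDs are arbitrary, but the partitions themselves typically differ,
# because distance-based methods care about the units each feature is measured in.
print((labels_raw == labels_stretched).mean())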
How do you prepare data for XGBoost?
Data preparation is a critical step in the data science process, and it is especially important when working with XGBoost. XGBoost is a powerful machine learning algorithm that can provide accurate predictions on data sets of all sizes. However, in order to get the most out of XGBoost, it is important to prepare the data in a way that is conducive to machine learning. This means ensuring that the data is clean, feature engineering has been performed, and that the data is in a format that can be easily consumed by the algorithm. By taking the time to prepare the data properly, data scientists can significantly improve the performance of their machine learning models.
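A minimal sketch of that preparation flow (assuming the pandas and xgboost packages; the file name "customers.csv", the "churned" target, and the "plan_type" column are made-up placeholders):

import pandas as pd
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

df = pd.read_csv("customers.csv")                     # hypothetical raw data
df = df.drop_duplicates().dropna(subset=["churned"])  # basic cleaning
df = pd.get_dummies(df, columns=["plan_type"])        # encode an example categorical column

X = df.drop(columns=["churned"])
y = df["churned"]
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X_train, y_train)
print(model.score(X_val, y_val))    # quick sanity check on held-out data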
When preparing the dataset for your machine learning model, you should use one-hot encoding on what type of data?
In machine learning and data science, one-hot encoding is a process by which categorical data is converted into a format that is suitable for use with machine learning algorithms. Each distinct category gets its own binary column, and each row has a 1 in the column for its category and a 0 everywhere else. For example, if there are three groups, the first group would be represented as [1, 0, 0], the second as [0, 1, 0], and the third as [0, 0, 1]. One-hot encoding is often used when working with categorical data, as it can help to improve the performance of machine learning models. In addition, one-hot encoding can also make it easier to visualize the relationship between different categories.
In machine learning and data science, one-hot encoding is a method used to convert categorical features into numerical features. This is often necessary when working with machine learning models, as many models can only accept numerical input. However, one-hot encoding is not without its problems. The most significant issue is the potential for increased dimensionality – if a dataset has too many features, it can be difficult for the model to learn from the data. In addition, one-hot encoding can create sparse datasets, which can also be difficult for some machine learning models to handle. Despite these issues, one-hot encoding remains a popular method for preparing data for machine learning models.
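A minimal sketch of one-hot encoding a categorical column with pandas (the column and category names are made up):

import pandas as pd

df = pd.DataFrame({"color": ["red", "green", "blue", "green"]})

encoded = pd.get_dummies(df, columns=["color"])
print(encoded)   # one binary indicator column per category: color_blue, color_green, color_red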
A retail company wants to start personalizing product recommendations to visitors of their website. They have historical data of what products the users have purchased and want to implement the system for new users, prior to them purchasing a product. What’s one way of phrasing a machine learning problem for this situation?
For this retail company, a machine learning problem could be phrased as a prediction problem. The goal would be to build a model that can take in data about a new user (such as demographic information and web browsing history) and predict which products they are likely to purchase. This would allow the company to give each new user personalized product recommendations, increasing the chances of making a sale. Data science techniques such as feature engineering and model selection would be used to build the best possible prediction model. By phrasing the machine learning problem in this way, the retail company can make the most of their historical data and improve the user experience on their website.
There are many ways to frame a machine learning problem for a retail company that wants to start personalizing product recommendations to visitors of their website. One way is to focus on prediction: using historical data of what products users have purchased, can we predict which products new users will be interested in? This is a task that machine learning is well suited for, and with enough data, we can build a model that accurately predicts product interests for new users. Another way to frame the problem is in terms of classification: given data on past purchases, can we classify new users into groups based on their product interests? This would allow the retail company to more effectively target personalization efforts. There are many other ways to frame the machine learning problem, depending on the specific goals of the company. But no matter how it’s framed, machine learning can be a powerful tool for personalizing product recommendations.
A data scientist is trying to determine how a model is doing based on training evaluation. The train accuracy plateaus out at around 70% and the validation accuracy is 67%. How should the data scientist interpret these results?
When working with machine learning models, it is important to evaluate how well the model is performing. This can be done by looking at the train and validation accuracy. In this case, the train accuracy has plateaued at around 70% and the validation accuracy is 67%. There are a few possible explanations for this. One possibility is that the model is overfitting on the training data. This means that the model is able to accurately predict labels for the training data, but it is not as effective at generalizing to new data. Another possibility is that there is a difference in the distribution of the training and validation data. If the validation data is different from the training data, then it makes sense that the model would have a lower accuracy on the validation data. To determine which of these explanations is most likely, the data scientist should look at the confusion matrix and compare the results of the training and validation sets. If there are large differences between the two sets, then it is likely that either overfitting or a difference in distributions is to blame. However, if there isn’t a large difference between the sets, then it’s possible that 70% is simply the best accuracy that can be achieved given the data.
One important consideration in machine learning is how well a model is performing. This can be determined in a number of ways, but one common method is to split the data into a training set and a validation set. The model is then trained on the training data and evaluated on the validation data. If the model is performing well, we would expect to see a similar accuracy on both the training and validation sets. In this case the training accuracy plateaus at around 70% while the validation accuracy is 67%. Because the gap between the two is small, severe overfitting is unlikely; the more pressing concern is that the model appears to be underfitting, since it cannot get past 70% even on the data it was trained on. The data scientist should look for ways to improve the model (for example, richer features or a higher-capacity model) so that both the training and validation accuracy improve.
When updating your weights using the loss function, what dictates how much change the weights should have?
In machine learning and data science, the learning rate is the parameter that dictates how much change the weights should have when updating them using the loss function. The learning rate is typically a small value between 0 and 1. A higher learning rate means that the weights are updated more quickly, which can lead to faster convergence but can also lead to instability. A lower learning rate means that the weights are updated more slowly, which can lead to slower convergence but helps avoid overshooting good solutions. The optimal learning rate for a given problem is usually found through experimentation. The bias term is another learned parameter; it shifts the model’s output independently of the input features and is updated alongside the weights. The initial weights are also important, as they determine where the model starts on the optimization landscape. The batch size matters too, as it defines how many training examples are used in each iteration of weight updates: larger batches give smoother, less noisy updates, while smaller batches give cheaper updates whose noise can act as a mild regularizer. Finding good values for all of these parameters can be a challenge, but doing so is essential for training high-quality machine learning models.
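A bare-bones sketch of the update itself (plain gradient descent on a mean-squared-error loss; the data and the learning rate are illustrative only):

import numpy as np

def gradient_descent_step(w, X, y, learning_rate):
    """One update: move the weights against the gradient of the MSE loss."""
    predictions = X @ w
    grad = 2 * X.T @ (predictions - y) / len(y)   # dLoss/dw for mean squared error
    return w - learning_rate * grad               # the learning rate scales how big the change is

rng = np.random.default_rng(0)
X, y = rng.normal(size=(100, 3)), rng.normal(size=100)
w = np.zeros(3)
for _ in range(1000):
    w = gradient_descent_step(w, X, y, learning_rate=0.1)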
An ad tech company is using an XGBoost model to classify its clickstream data. The company’s Data Scientist is asked to explain how the model works to a group of non-technical colleagues. What is a simple explanation the Data Scientist can provide?
Machine learning is a form of artificial intelligence that allows computers to learn from data without being explicitly programmed. Machine learning is a powerful tool for solving complex problems, and XGBoost is a popular machine learning algorithm. Machine learning algorithms like XGBoost work by building a model based on training data, and then using that model to make predictions on new data. In the case of the ad tech company, the Data Scientist has used XGBoost to build a model that can classify clickstream data. This means that the model can look at new data and predict which category it belongs to. For example, the model might be able to predict whether a user is likely to click on an ad or not. The Data Scientist can explain how the model works by showing how it makes predictions on new data.
Machine learning is a method of teaching computers to learn from data without being explicitly programmed; it is a subset of artificial intelligence (AI). The XGBoost algorithm is a machine learning technique used to create models that predict outcomes by learning from past data. XGBoost is an implementation of gradient boosting, a technique for creating models that make predictions by combining the predictions of many simple individual models. The XGBoost algorithm is highly effective and is used by many organizations, including ad tech companies, to classify their data. The Data Scientist can explain the model in plain terms: many small models are built one after another, each one correcting the mistakes of the ones before it, and their combined vote decides how each clickstream record is classified.
An ML Engineer at a real estate startup wants to use a new quantitative feature for an existing ML model that predicts housing prices. Before adding the feature to the cleaned dataset, the Engineer wants to visualize the feature in order to check for outliers and overall distribution and skewness of the feature. What visualization technique should the ML Engineer use?
The machine learning engineer at the real estate startup should use a visualization technique in order to check for outliers and overall distribution and skewness of the new quantitative feature. There are many different visualization techniques that could be used for this purpose, but two of the most effective are histograms and scatterplots. A histogram can show the distribution of values for the new feature, while a scatterplot can help to identify any outliers. By visualizing the data, the engineer will be able to ensure that the new feature is of high quality and will not impact the performance of the machine learning model.
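A quick sketch of that check (assuming pandas and matplotlib; "housing.csv" and the "lot_size" column are stand-in names):

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("housing.csv")              # hypothetical cleaned dataset

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
df["lot_size"].plot.hist(bins=50, ax=ax1)    # overall distribution and skewness
df["lot_size"].plot.box(ax=ax2)              # outliers show up as points beyond the whiskers
print(df["lot_size"].skew())                 # a numeric skewness check to go with the plots
plt.show()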
When updating your weights using the loss function, what dictates how much change the weights should have?
The loss function is a key component of machine learning algorithms, as it determines how well the model is performing. When updating the weights using the loss function, the learning rate dictates how much change the weights should have. The learning rate is a hyperparameter that can be tuned to find the optimal value for the model. The bias term is another important factor that can influence the weights. The initial weights can also play a role in how much change the weights should have. The batch size is another important factor to consider when updating the weights using the loss function.
A data scientist wants to clean and merge two small datasets stored in CSV format. What tool can they use to merge these datasets together?
As a data scientist, you often need to work with multiple datasets in order to glean insights that would be hidden in any one dataset on its own. In order to do this, you need to be able to clean and merge datasets quickly and efficiently. One tool that can help you with this task is Pandas. Pandas is a Python library that is specifically designed for data analysis. It offers a wide range of features that make it well-suited for merging datasets, including the ability to read in CSV format, clean data, and merge datasets with ease. In addition, Pandas integrates well with other machine learning libraries such as Scikit-learn, making it a valuable tool for data scientists.
As a data scientist, one of the most important skills is knowing how to clean and merge datasets. This can be a tedious and time-consuming process, but it is essential for machine learning and data science projects. There are several tools that data scientists can use to merge datasets, but one of the most popular options is pandas. Pandas is a Python library that offers a wide range of functions for data manipulation and analysis. Additionally, pandas has built-in support for reading and writing CSV files. This makes it an ideal tool for merging small datasets stored in CSV format. With pandas, data scientists can quickly and easily clean and merge their data, giving them more time to focus on other aspects of their projects.
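A minimal sketch of that merge (the file names and the "customer_id" join key are assumptions):

import pandas as pd

customers = pd.read_csv("customers.csv")
orders = pd.read_csv("orders.csv")

# Basic cleaning before joining
customers = customers.drop_duplicates()
orders = orders.dropna(subset=["customer_id"])

merged = customers.merge(orders, on="customer_id", how="inner")
merged.to_csv("merged.csv", index=False)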
A real estate company is building a linear regression model to predict housing prices for different cities in the US. Which of the following is NOT a good metric to measure performance of their regression model?
Machine learning is a subset of data science that deals with the design and development of algorithms that can learn from and make predictions on data. Linear regression is a machine learning algorithm used to predict numerical values based on a linear relationship between input variables. When building a linear regression model, it is important to choose an appropriate metric to measure the performance of the model. The R-squared value, mean squared error, and mean absolute error are all valid metrics for measuring the performance of a linear regression model. However, the F1 score is not a good metric to use for this purpose: it is a classification metric built from precision and recall over discrete class labels, so it does not apply to a model that predicts continuous housing prices. As such, using the F1 score to evaluate a linear regression model would not give meaningful results.
A real estate company wants to provide its customers with a more accurate prediction of the final sale price for houses they are considering in various cities. To do this, the company wants to use a fully connected neural network trained on data from the previous ten years of home sales, as well as other features. What kind of machine learning problem does this situation most likely represent?
Answer: Regression
Which feature of Amazon SageMaker can you use for preprocessing the data?
Answer: Amazon Sagemaker Notebook instances
Amazon SageMaker enables developers and data scientists to build, train, tune, and deploy machine learning (ML) models at scale. You can deploy trained ML models for real-time or batch predictions on unseen data, a process known as inference. However, in most cases, the raw input data must be preprocessed and can’t be used directly for making predictions. This is because most ML models expect the data in a predefined format, so the raw data needs to be first cleaned and formatted in order for the ML model to process the data. You can use the Amazon SageMaker built-in Scikit-learn library for preprocessing input data and then use the Amazon SageMaker built-in Linear Learner algorithm for predictions.
What setting, when creating an Amazon SageMaker notebook instance, can you use to install libraries and import data?
Answer: LifeCycle Configuration
You work for the largest coffee chain in the world. You’ve recently decided to source beans from a new market to create new blends and flavors. These beans come from 30 different growers, in 3 different countries. In order to keep a consistent flavor, you have each grower send samples of their beans to your tasting baristas who rate the beans on 20 different dimensions. You now need to group the beans together so the supply can be diversified yet the flavor of the final product kept as consistent as possible. What is one way you could convert this business situation into a machine learning problem?
Answer: Frame it as an unsupervised clustering problem: treat each grower’s 20 tasting-dimension scores as a feature vector and cluster the growers (for example with k-means), then source beans from within a cluster so the supply is diversified while the flavor of each blend stays consistent.
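A sketch of that framing (assuming scikit-learn and a made-up 30 x 20 ratings matrix of growers by taste dimensions; the choice of 4 clusters is arbitrary):

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

ratings = np.random.rand(30, 20)              # 30 growers, 20 tasting dimensions (stand-in data)
scaled = StandardScaler().fit_transform(ratings)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scaled)
print(kmeans.labels_)   # cluster label per grower; blend within a cluster for consistent flavour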
In which phase of the ML pipeline does the machine learn from the data?
Answer: Model Training
A text analytics company is developing a text classification model to detect whether a document involves offensive content or not. The training dataset included ten non-offensive documents for every one offensive document. Their model resulted in an accuracy score of 94%. What can we conclude from this result?
Answer: Accuracy is the wrong metric here, because it can be heavily influenced by the large class (non-offensive documents).
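A tiny illustration of why (synthetic labels with the same 10:1 imbalance, assuming scikit-learn): a model that never predicts the offensive class still scores around 91% accuracy while catching nothing.

import numpy as np
from sklearn.metrics import accuracy_score, recall_score, f1_score

y_true = np.array([0] * 100 + [1] * 10)   # ten non-offensive docs for every offensive one
y_pred = np.zeros_like(y_true)            # a useless model that always says "non-offensive"

print(accuracy_score(y_true, y_pred))                  # ~0.91, looks impressive
print(recall_score(y_true, y_pred, zero_division=0))   # 0.0, misses every offensive document
print(f1_score(y_true, y_pred, zero_division=0))       # 0.0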
A Machine Learning Engineer is creating a regression model for forecasting company revenue based on an internal dataset made up of past sales and other related data.
What metric should the Engineer use to evaluate the ML model?
Answer: Root Mean Squared error (RMSE)
Root Mean Square Error (RMSE) is the standard deviation of the residuals (prediction errors). Residuals are a measure of how far from the regression line data points are; RMSE is a measure of how spread out these residuals are. In other words, it tells you how concentrated the data is around the line of best fit.
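A minimal sketch of the calculation (the numbers are made up):

import numpy as np

y_true = np.array([120_000, 250_000, 310_000, 180_000])   # actual revenue values
y_pred = np.array([118_000, 260_000, 295_000, 185_000])   # model forecasts

residuals = y_true - y_pred
rmse = np.sqrt(np.mean(residuals ** 2))
print(rmse)   # typical size of a forecast error, in the same units as the target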
An ML scientist has built a decision tree model using scikit-learn with 1,000 trees. The training accuracy for the model was 99.2% and the test accuracy was 70.3%. Should the Scientist use this model in production?
Answer: No, because it is not generalizing well on the test set
The curse of dimensionality relates to which of the following?
Answer: A – A high number of features in a dataset
The curse of dimensionality relates to a high number of features in a dataset.
Curse of Dimensionality describes the explosive nature of increasing data dimensions and the resulting exponential increase in the computational effort required for processing and/or analysis. The term was first introduced by Richard E. Bellman.
A Data Scientist wants to include “month” as a categorical column in a training dataset for an ML model that is being built. However, the ML algorithm gives an error when the column is added to the training data. What should the Data Scientist do to add this column?
Answer: Encode the column before training, for example by one-hot encoding the twelve month values (or mapping them to integers), so the algorithm receives numerical input instead of raw strings.
StandardScaler standardizes a feature by subtracting the mean and then scaling to unit variance. Unit variance means dividing all the values by the standard deviation. StandardScaler does not meet the strict definition of scale I introduced earlier.
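A small sketch of what that standardization does (assuming scikit-learn; the array is arbitrary):

import numpy as np
from sklearn.preprocessing import StandardScaler

X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])

scaled = StandardScaler().fit_transform(X)   # (x - mean) / std, computed per feature
print(scaled.mean(), scaled.std())           # ~0.0 and 1.0 after scaling

# Equivalent by hand:
print((X - X.mean(axis=0)) / X.std(axis=0))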
What is the primary reason that one might want to pick either random search or Bayesian optimization over grid search when performing hyperparameter optimization?
Answer: Random search and Bayesian methods leave smaller unexplored regions than grid searches
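A sketch of the random-search option (assuming scikit-learn and SciPy; the estimator and the parameter ranges are placeholders, not a recommendation):

from scipy.stats import randint, uniform
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=0),
    param_distributions={
        "n_estimators": randint(50, 500),     # sampled, not laid out on a fixed grid
        "max_depth": randint(2, 20),
        "max_features": uniform(0.1, 0.9),    # continuous range instead of a few grid points
    },
    n_iter=25,        # budget: 25 sampled configurations
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)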
A Data Scientist trained an XGBoost model to classify internal documents for further inquiry, and now wants to evaluate the model’s performance by looking at the results visually. What technique should the Data Scientist use in this situation?
Answer: A confusion matrix, which gives a visual summary of the correct and incorrect classifications for each class.
Hi there! I'm a student and I'm trying to train StyleGAN3 on a folder of 200 images (I want to create a morphing video synchronized with music), but I'm having some issues regarding the GPU. Can you recommend some valid alternatives? Thank you!
I'm writing a proof of concept for a RAG application for hundreds of thousands of textual records stored in a Postgres DB, using pgvector to store embeddings (with an HNSW index). Vector dimensions are specified correctly. I'm currently running experiments using varied chunk sizes for the text and comparing two different embedding models, nomic-embed-text and snowflake-arctic-embed-m-long (actual chunk size can vary a little because I am not breaking words to force a size).
Here's the gist of the experiment:
1. Create embeddings for n documents.
2. Create a list of queries/prompts for information that is assuredly contained in SOME of those documents. Examples: What were the events that happened at "location x"? What is John Doe's nickname? Who were the patients that checked into "hospital name"? Tell me about a requisition made by the director of sales.
3. For each query/prompt, run a cosine distance query and get the nearest 5 matching chunks.
4. After calculating the average distance for all queries/chunks, the lowest value is, in theory, the best combination of model and chunk size.
This worked SUPER well with a small sample of documents (say ≈ 200), but once I added more documents I started noticing an issue. Some of the new documents contain lists of literally 30k+ names. Whenever I run a query that contains names, chunks from those lists are returned, EVEN IF THEY DON'T CONTAIN THE NAMES, or any of the other information in the prompt (this happens regardless of the chosen chunk size or strategy).
My theory is that when a chunk containing names is embedded, the resulting embedding carries a strong signal for the semantic meaning of "name", but the components that differentiate one name from another can be relatively weak. A chunk containing almost nothing but names is then considered very similar to the prompt's embedding, despite the names themselves not matching.
For those of you with more experience/understanding, am I wrong in these assumptions? Would you have any suggestions or workarounds? I have some ideas but would like to see if anyone has faced the same issues.
I am trying to replace the key, query, and value in different prompts of the diffusion model for video editing. I want to understand why key, query, and value are effective and what they represent in the diffusion model.
I asked a few of them (via Ollama) about WebGPU adoption and it turns out all of them are using old data. Here are the cutoff dates they gave me:
– wizardlm2:7b-q5_0: early 2023
– LLAMA 3: August 2022
– LLAMA 2: no date, but gave a similar answer to LLAMA 3
– mistral: no date and a very generic answer; I couldn't get it to divulge when it was last updated
I also went online and asked ChatGPT and even its answer was from January 2022. Are there newer models around? Edit: You can probably tell I'm new to this...
I'm not an ML expert, but I work with some, and I've been asking around the (virtual) office, as well as interviewing scholars. Based on my research, I wrote an article you can read here. It seems to me that, while the hardware and software supporting LLMs will pretty certainly improve, the data presents a more complicated story.
There's the issue of model collapse: essentially, the idea that as models approximate the distributions of original data sets with finite sampling, they will inevitably cut off the tails of those distributions. And as they begin to sample their own approximations in future model generations, this will lead to a collapse of the model (unless it can continue to tap that original data source).
Then there's the issue of error propagation across generations of LLMs. Mark Kon, at Boston University, suggests tools like watermarking to help keep our datasets clean moving forward (he described the problem as a bigger mouse/bigger mousetrap situation). Mike Chambers, one of my colleagues at AWS, basically argued as much or more can be accomplished at this point by cleaning our datasets as by ingesting ever more data.
One related, long term takeaway is that LLMs and other models will probably start working to ingest new categories of data (beyond text and image) before too long. And that next paradigm shift is going to happen sooner than many of us think. Thoughts?
Hello, I am trying to implement this from a paper:
1) First, select the first l sampling points in the sampling points of bearing faults and calculate the mean μ_rms and standard deviation σ_rms of their root mean square values, and establish a 3σ-criterion-based judgment interval [μ_rms − 3σ_rms, μ_rms + 3σ_rms] accordingly.
2) Second, calculate the RMS index for the (l + 1)-th point FPT_{l+1} and compare it with the decision interval in step 1. If its value is not in this range, then recalculate the judgment interval after making l = l + 1. If its value is within this range, a judgment is triggered once.
3) Finally, in order to avoid false triggers, three consecutive triggers are used as the identification basis for the final FPT, and at that point FPT_l = FPT.
The paper title: Physics guided neural network: Remaining useful life prediction of rolling bearings using long short-term memory network through dynamic weighting of degradation process.
My question is: how do I get μ_rms and σ_rms from the RMS? What I did was first sample the data and then calculate the RMS on the samples. But then I recreate sequences from these RMS values (which doesn't seem logical to me) and calculate μ_rms and σ_rms from those. I do use the values I obtain to build the interval and compare it with the RMS value. The problem is that, by doing this, it triggers way too early. This is the code I have made:

import numpy as np

def find_fpt(rms_sample, sample):
    fpt_index = 0
    trigger = 0
    for i in range(len(rms_sample)):
        # 3-sigma judgment interval built from the current window of RMS values
        upper = np.mean(rms_sample[i] + 3 * np.std(rms_sample[i]))
        lower = np.mean(rms_sample[i] - 3 * np.std(rms_sample[i]))
        rms = np.mean(np.square(sample[i + 1]) ** 2)
        if upper > rms > lower:
            if trigger == 3:       # three consecutive triggers identify the FPT
                fpt_index = i
                break
            trigger += 1
        else:
            trigger = 0
        print(trigger)
    return fpt_index

def sliding_window(data, window_size):
    return np.lib.stride_tricks.sliding_window_view(data, window_size)

window_size = 20
list_bearing, list_rul = load_dataset_and_rul()   # my own data-loading helper
sampling = sliding_window(list_bearing[0][::100], window_size)
rms_values = np.sqrt(np.mean(np.square(sampling) ** 2, axis=1))
rms_sample = sliding_window(rms_values, window_size)
fpt = find_fpt(rms_sample, sampling)
I was experimenting with the TabNet architecture by Google (https://arxiv.org/pdf/1908.07442.pdf) and found that on my dataset it only outperforms when the data has a lot of randomness and noise. Traditional machine learning algorithms like XGBoost and random forest do a better job on datasets where the features are robust enough, but they fail the zero-shot test, where the transformer shows some accuracy. So I wanted to check whether it's possible to merge the traditional techniques and the transformer architecture, so that the result performs better on the datasets where traditional ML algorithms shine and also gives good zero-shot accuracy. While trying to merge them, I found that the TabNet paper assumes each feature is independent and does not model relationships between the features themselves, whereas the TabTransformer architecture (https://arxiv.org/pdf/2012.06678.pdf) takes that into account but doesn't have the feature selection proposed in TabNet. I tried to merge them but got stuck where I have to do feature selection on the basis of the dimension assigned to each feature; this is done by sparsemax in the TabNet paper, and I can't find a way to do it. Any help would be appreciated.
Ansys has released an AutoML product for physical simulation called Ansys Sim AI (https://www.ansys.com/fr-fr/news-center/press-releases/1-9-24-ansys-launches-simai). As a machine learning engineer, I wonder what types of models can be used to train on 3D mesh data in STL format with physical fields. How can the varying dimensions of input and output data be managed for different geometric objects? Does anyone have any ideas on this topic?
I have been working on a project for the past couple of months, and I wanted to know if anyone has feedback or thoughts to fuel its completion. I built a lexer and parser using Python and C tokens to create a language that reads a Python script or file and uses hooks to amend or write new lines. It will be able to take even a blank Python file and write, test, and deliver a working program based on a single prompt provided initially by the user. The way it works is that it calls GPT's API with automated prompts that are built into the program, so it creates a program by itself from only one initial user prompt. It is a Python program with the language, which I named autoscripter, built into it. I hope to finish it by the end of the year, if not into next year. This is a very challenging project, but I believe it is the future of scripting, and I have no doubt Microsoft will release something like this sooner rather than later. Any thoughts? I started by designing a debugger that error-corrected Python code and realized that not only could error correction be automated, but the entire scripting process could also be left largely to automation.
I am working on a project regarding the marketing of AI-powered products in retail stores. I am trying to find products that market "AI" as the forefront feature, e.g. Samsung's Bespoke AI series or BMW's AI-automated driving. They need to be physical products so I can go to stores and do research and surveys. Any kind of help is appreciated.
The latest releases of models such as GPT-4 and Claude show a significant jump in maximum context length (4/8k -> 128k+). The progress in the number of tokens these models can process sounds remarkable in percentage terms. What has led to this? Is it something that has happened purely because of increased compute becoming available during training? Are there algorithmic advances that have led to this?
I'm a recent engineering graduate who's switching roles from traditional software engineering ones to ML/AI focused ones. I've gone through an introductory probability course in my undergrad, but the recent developments such as diffusion models, or even some relatively older ones like VAEs or GANs, require an advanced understanding of probability theory. I'm finding the math/concepts related to probability hard to follow when I read up on these models. Any suggestions on how to bridge the knowledge gap?
Say I have 1000 PDF docs that I use as input to a RAG pipeline. I want to evaluate different steps of the pipeline so that I can measure:
– Which embedding models work better for my data?
– Which rerankers work, and are they required?
– Which LLMs give the most factual and coherent answers?
How do I evaluate these steps of the pipeline? Based on my research, I found that most frameworks require labels for both retrieval and generation evaluation. How do I go about creating this data using an LLM? Are there any other techniques? Some things I found:
For retrieval: use an LLM to generate synthetic ranked labels. Which LLM should I use? What best practices should I follow? Any code that I can look at for this?
For generated text:
– Generate synthetic labels like the above for each generation.
– Use an LLM as a judge to rate each generation based on the context it got and the question asked.
Which LLMs would you recommend? What techniques worked for you?
Hi everyone. I want to build this idea of mine for a class project, and I wanted some input from others. I want to build an AI algorithm that can play the game Drift Hunters (https://drift-hunters.co/drift-hunters-games). I imagine I have to build some reinforcement learning program, though I'm not sure exactly how to organize state representations and input data. I also imagine that I'd need my screen to be recorded for a continuous period of time to collect data. I chose this game since it has three very basic commands (turn left, turn right, and drive forward) and the purpose of the game (which never ends) is to maximize drift score. Any ideas are much appreciated; let me know if you need more info. Thanks everyone.
PDF: https://arxiv.org/abs/2404.11457
GitHub: https://github.com/KID-22/LLM-IR-Bias-Fairness-Survey
Abstract: With the rapid advancement of large language models (LLMs), information retrieval (IR) systems, such as search engines and recommender systems, have undergone a significant paradigm shift. This evolution, while heralding new opportunities, introduces emerging challenges, particularly in terms of biases and unfairness, which may threaten the information ecosystem. In this paper, we present a comprehensive survey of existing works on emerging and pressing bias and unfairness issues in IR systems when the integration of LLMs. We first unify bias and unfairness issues as distribution mismatch problems, providing a groundwork for categorizing various mitigation strategies through distribution alignment. Subsequently, we systematically delve into the specific bias and unfairness issues arising from three critical stages of LLMs integration into IR systems: data collection, model development, and result evaluation. In doing so, we meticulously review and analyze recent literature, focusing on the definitions, characteristics, and corresponding mitigation strategies associated with these issues. Finally, we identify and highlight some open problems and challenges for future work, aiming to inspire researchers and stakeholders in the IR field and beyond to better understand and mitigate bias and unfairness issues of IR in this LLM era.
So, nowadays, everyone is distilling rationales gathered from a large language model into another, relatively smaller model. However, I remember that in the old days we trained the small network to match the logits of the large network when doing distillation. Has this been forgotten, or was it tried and found not to work? submitted by /u/miladink
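For reference, the classic logit-matching distillation loss (in the spirit of Hinton et al., 2015) can be sketched in PyTorch roughly as follows; the temperature and mixing weight below are illustrative defaults, not prescribed values:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    # Soft targets: KL divergence between temperature-softened distributions,
    # scaled by T^2 so gradients keep a comparable magnitude.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: ordinary cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard
```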
Lincoln Laboratory researchers are using AI to get a better picture of the atmospheric layer closest to Earth's surface. Their techniques could improve weather and drought prediction.
Google Cloud’s BigQuery is a powerful tool for storing and querying large data sets. However, sometimes you may need to export data from BigQuery in order to perform additional analysis or simply to have a backup. Thankfully, Google Cloud makes it easy to export data from BigQuery to a CSV file.
The first step is to select the dataset that you want to export.
Next, click on the “Export Table” button. In the pop-up window, select “CSV” as the file format and choose a location to save the file.
Finally, click on the “Export” button and Google Cloud will begin exporting the data.
Depending on the size of the data set, this may take several minutes. Once the export is complete, you will have a CSV file containing all of the data from BigQuery.
This exports your data to a CSV file in Google Cloud Storage. You can then download the file from Cloud Storage and use it in another program. Alternatively, you can use the “bq extract” command to export a table straight from the command line, or the “bq load” command to load data from Cloud Storage back into a BigQuery table.
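If you prefer to script the export instead of clicking through the console, a minimal sketch with the official Python client looks roughly like this; the project, dataset, table, and bucket names below are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder identifiers; replace with your own project, dataset, table, and bucket.
table_ref = "my-project.my_dataset.my_table"
destination_uri = "gs://my-bucket/exports/my_table-*.csv"

# Run an extract job that writes the table out as CSV shards in Cloud Storage.
job_config = bigquery.ExtractJobConfig(destination_format="CSV")
extract_job = client.extract_table(table_ref, destination_uri, job_config=job_config)
extract_job.result()  # Wait for the export job to finish.
```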
What is the Difference Between Mini-Batch and Full-Batch in Machine Learning?
In the field of machine learning, there are two types of batch sizes that are commonly used: mini-batch and full-batch. Both have their pros and cons, and the choice of which to use depends on the situation. Here’s a quick rundown of the differences between mini-batch and full-batch in machine learning.
Mini-Batch Machine Learning
Mini-batch machine learning is a type of batch processing where the data is divided into small batches before being fed into the machine learning algorithm. The advantage of mini-batch machine learning is that it can provide more accurate results than full-batch machine learning, since the data is less likely to be affected by outliers. However, the disadvantage of mini-batch machine learning is that it can be slower than full-batch machine learning, since each batch has to be processed separately.
Full-Batch Machine Learning
Full-batch machine learning is a type of batch processing where the entire dataset is fed into the machine learning algorithm at once. The advantage of full-batch machine learning is that it is faster than mini-batch machine learning, since all the data can be processed simultaneously. However, the disadvantage of full-batch machine learning is that it can be less accurate than mini-batch machine learning, since outliers in the dataset can have a greater impact on the results.
So, which should you use? It depends on your needs. If accuracy is more important than speed, then mini-batch machine learning is the way to go. On the other hand, if speed is more important than accuracy, then full-batch machine learning is the way to go.
The Difference Between Mini-Batch and Full-Batch Learning
In machine learning, there are two main types of batch learning: mini-batch and full-batch. Both types of batch learning algorithms have their own pros and cons that data scientists should be aware of. In this blog post, we’ll take a look at the difference between mini-batch and full-batch learning so you can make an informed decision about which type of algorithm is right for your project.
Mini-Batch Learning
Mini-batch learning is a type of batch learning that operates on small subsets of the training data, typically referred to as mini-batches. The advantage of mini-batch learning is that it can be parallelized across multiple processors or devices, which makes training much faster than full-batch training. Another advantage is that mini-batches can be generated on the fly from a larger dataset, which is especially helpful if the entire dataset doesn’t fit into memory. However, one downside of mini-batch learning is that it can sometimes lead to suboptimal results due to its stochastic nature.
Full-Batch Learning
Full-batch learning is a type of batch learning that operates on the entire training dataset at once. The advantage of full-batch learning is that it converges to the global optimum more quickly than mini-batch or stochastic gradient descent (SGD) methods. However, the disadvantage of full-batch learning is that it is very slow and doesn’t scale well to large datasets. Additionally, full-batch methods can’t be parallelized across multiple processors or devices due to their sequential nature.
So, which type of batch learning algorithm is right for your project? If you’re working with a small dataset, then full-batch learning might be your best bet. However, if you’re working with a large dataset or need to train your model quickly, then mini-batch or SGD might be better suited to your needs. As always, it’s important to experiment with different algorithms and tuning parameters to see what works best for your particular problem.
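To make the distinction concrete, here is a minimal NumPy sketch of the two update styles on a synthetic linear-regression problem; the learning rate, batch size, and data are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                                   # synthetic features
y = X @ np.array([1.0, -2.0, 0.5, 3.0, 0.0]) + rng.normal(scale=0.1, size=1000)

def full_batch_gd(X, y, lr=0.1, epochs=100):
    """One weight update per epoch, using the gradient over the whole dataset."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def mini_batch_gd(X, y, lr=0.1, epochs=100, batch_size=32):
    """Many weight updates per epoch, each using a small shuffled batch."""
    w = np.zeros(X.shape[1])
    n = len(y)
    for _ in range(epochs):
        idx = rng.permutation(n)
        for start in range(0, n, batch_size):
            b = idx[start:start + batch_size]
            grad = X[b].T @ (X[b] @ w - y[b]) / len(b)
            w -= lr * grad
    return w

print(full_batch_gd(X, y))
print(mini_batch_gd(X, y))
```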
Welcome to AWS Certification Machine Learning Specialty (MLS-C01) Practice Exams!
This book is designed to help you prepare for the AWS Certified Machine Learning – Specialty (MLS-C01) exam and earn your AWS certification. The AWS Certified Machine Learning – Specialty (MLS-C01) exam is designed for individuals who have a strong understanding of machine learning concepts and techniques, and who can design, build, and deploy machine learning models on the AWS platform.
In this book, you will find a series of practice exams that are designed to mimic the format and content of the actual MLS-C01 exam. Each practice exam includes a set of multiple choice and multiple response questions that cover a range of topics, including machine learning concepts, techniques, and algorithms, as well as the AWS services and tools used to build and deploy machine learning models.
By working through these practice exams, you can test your knowledge, identify areas where you need further study, and gain confidence in your ability to pass the MLS-C01 exam. Whether you are a machine learning professional looking to earn your AWS certification or a student preparing for a career in machine learning, this book is an essential resource for your exam preparation.
What is the best Japanese natural language processing (NLP) library?
NLP is a field of computer science and artificial intelligence that deals with the interactions between computers and human languages. NLP algorithms are used to process and analyze large amounts of natural language data. Japanese NLP libraries are used to develop applications that can understand and respond to Japanese text.
The best Japanese NLP library depends on your application’s needs.
For example, if you are developing a machine translation application, you will need a library that supports word sense disambiguation and part-of-speech tagging. If you are developing a chatbot, you will need a library that supports sentence analysis and dialogue management. In general, Japanese NLP libraries can be divided into three categories: rule-based systems, statistical systems, and hybrid systems.
Rule-based systems rely on linguistic rules to process language data.
Statistical systems use statistical models to process language data.
Hybrid systems use a combination of linguistic rules and statistical models to process language data.
The best Japanese NLP library for your application will depend on the type of NLP tasks you need to perform and your resources (e.g., time, data, computing power).
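As one small, hedged illustration (not an endorsement of any particular library), tokenizing Japanese text with the pure-Python Janome morphological analyzer looks roughly like this; the attribute names assume Janome's documented tokenizer API, and MeCab-based wrappers or spaCy's Japanese models are common alternatives:

```python
# Minimal sketch: morphological analysis of a Japanese sentence with Janome
# (pip install janome). Each token exposes its surface form and a
# comma-separated part-of-speech string.
from janome.tokenizer import Tokenizer

tokenizer = Tokenizer()
for token in tokenizer.tokenize("猫が好きです"):
    print(token.surface, token.part_of_speech)
```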
XGBoost is a powerful tool that has a wide range of applications in the real world. XGBoost is a gradient-boosting algorithm that builds an ensemble of decision trees, with each new tree trained to correct the errors of the trees before it.
XGBoost has been used to improve the performance of data science models in a variety of fields, including healthcare, finance, and retail.
In healthcare, XGBoost has been used to predict patient outcomes, such as length of stay in the hospital and mortality rates.
In finance, XGBoost has been used to predict stock prices and credit card fraud.
In retail, XGBoost has been used to improve customer segmentation and product recommendations.
XGBoost is a versatile tool that can be used to improve the performance of machine learning models in many different fields.
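As a small illustration, here is a hedged sketch of training an XGBoost classifier through its scikit-learn-style API; the data is synthetic and the hyperparameters are arbitrary placeholders rather than recommendations:

```python
import numpy as np
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a tabular dataset (e.g., churn or fraud features).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 8))
y = (X[:, 0] + X[:, 3] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))
```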
You are given an array arr of N integers. For each index i, you are required to determine the number of contiguous subarrays that fulfill the following conditions:
The value at index i must be the maximum element in the contiguous subarray, and
These contiguous subarrays must either start from or end on index i.
Signature: int[] countSubarrays(int[] arr)
Input:
Array arr is a non-empty list of unique integers that range from 1 to 1,000,000,000.
Size N is between 1 and 1,000,000.
Output: An array where each index i contains an integer denoting the maximum number of contiguous subarrays of arr[i].
Example: arr = [3, 4, 1, 6, 2], output = [1, 3, 1, 5, 1]
Explanation:
For index 0 – [3] is the only contiguous subarray that starts (or ends) with 3, and the maximum value in this subarray is 3.
For index 1 – [4], [3, 4], [4, 1]
For index 2 – [1]
For index 3 – [6], [6, 2], [1, 6], [4, 1, 6], [3, 4, 1, 6]
For index 4 – [2]
So, the answer for the above input is [1, 3, 1, 5, 1]
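The problem statement uses a Java-style signature; below is a hedged sketch in Python. For each index it counts how far a subarray can extend to the left and to the right while arr[i] stays the maximum, using monotonic stacks so the whole pass is O(N) even at the upper bound of N = 1,000,000:

```python
def count_subarrays(arr):
    n = len(arr)
    left = [0] * n   # qualifying subarrays that end at i (excluding [arr[i]] itself)
    right = [0] * n  # qualifying subarrays that start at i (excluding [arr[i]] itself)

    # left[i]: consecutive elements immediately to the left of i that are all smaller.
    stack = []
    for i in range(n):
        while stack and arr[stack[-1]] < arr[i]:
            stack.pop()
        left[i] = i - (stack[-1] + 1 if stack else 0)
        stack.append(i)

    # right[i]: consecutive elements immediately to the right of i that are all smaller.
    stack = []
    for i in range(n - 1, -1, -1):
        while stack and arr[stack[-1]] < arr[i]:
            stack.pop()
        right[i] = (stack[-1] - 1 if stack else n - 1) - i
        stack.append(i)

    # +1 accounts for the single-element subarray [arr[i]].
    return [left[i] + right[i] + 1 for i in range(n)]

print(count_subarrays([3, 4, 1, 6, 2]))  # [1, 3, 1, 5, 1]
```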
Rotational Cipher: One simple way to encrypt a string is to “rotate” every alphanumeric character by a certain amount. Rotating a character means replacing it with another character that is a certain number of steps away in normal alphabetic or numerical order. For example, if the string “Zebra-493?” is rotated 3 places, the resulting string is “Cheud-726?”. Every alphabetic character is replaced with the character 3 letters higher (wrapping around from Z to A), and every numeric character is replaced with the character 3 digits higher (wrapping around from 9 to 0). Note that the non-alphanumeric characters remain unchanged. Given a string and a rotation factor, return an encrypted string.
Return the result of rotating input a number of times equal to rotationFactor.
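A small sketch in Python that follows the rules above (letters wrap within their own case, digits wrap from 9 back to 0, everything else is left unchanged):

```python
def rotational_cipher(text, rotation_factor):
    result = []
    for ch in text:
        if ch.isupper():
            # Rotate within A-Z.
            result.append(chr((ord(ch) - ord('A') + rotation_factor) % 26 + ord('A')))
        elif ch.islower():
            # Rotate within a-z.
            result.append(chr((ord(ch) - ord('a') + rotation_factor) % 26 + ord('a')))
        elif ch.isdigit():
            # Rotate within 0-9.
            result.append(chr((ord(ch) - ord('0') + rotation_factor) % 10 + ord('0')))
        else:
            result.append(ch)  # Non-alphanumeric characters are unchanged.
    return ''.join(result)

print(rotational_cipher("Zebra-493?", 3))  # Cheud-726?
```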
Hadoop is an open-source software framework for storing data and running applications on clusters of commodity hardware. It provides massive storage for any kind of data, enormous processing power and the ability to handle virtually limitless concurrent tasks or jobs. Apache Hadoop is used mainly for Data Analysis.
The question is: which programming language is best for driving Hadoop and Spark?
The programming model for developing Hadoop-based applications is MapReduce. In other words, MapReduce is the processing layer of Hadoop. The MapReduce programming model is designed for processing large volumes of data in parallel by dividing the work into a set of independent tasks. Hadoop MapReduce is a software framework for easily writing applications that process the vast amounts of structured and unstructured data stored in the Hadoop Distributed File System (HDFS). The biggest advantage of MapReduce is that it makes data processing across multiple computing nodes easy. Under the MapReduce model, the data processing primitives are called mappers and reducers.
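As an illustration of the mapper/reducer split, here is a minimal word-count sketch written as Hadoop Streaming scripts in Python; the script names are illustrative, and the reducer relies on Hadoop sorting the mapper output by key before it arrives:

```python
# mapper.py - emits a (word, 1) pair for every word read from stdin.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```

```python
# reducer.py - sums the counts for each word. Hadoop Streaming delivers the
# mapper output sorted by key, so identical words arrive consecutively.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, count = line.rstrip("\n").split("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)
if current_word is not None:
    print(f"{current_word}\t{current_count}")
```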
Spark is written in Scala and Hadoop is written in Java.
The key difference between Hadoop MapReduce and Spark lies in the approach to processing: Spark can do it in memory, while Hadoop MapReduce has to read from and write to disk. As a result, the speed of processing differs significantly: because no time is spent moving data and intermediate results in and out of disk, Spark can be up to 100 times faster than MapReduce.
Spark’s hardware is more expensive than Hadoop MapReduce’s because it needs a lot of RAM.
Hadoop runs on Linux, which means you need some knowledge of Linux.
Java is important for Hadoop because:
There are some advanced features that are only available via the Java API.
It lets you go deep into the Hadoop code and figure out what’s going wrong.
In both of these situations, Java becomes very important. As a developer, you can enjoy many advanced features of Spark and Hadoop if you start with their native languages (Java and Scala).
Simple syntax – Python’s syntax is simpler and more user-friendly than that of the other two languages.
Easy to learn – Python syntax reads almost like English, so it is much easier to learn and master.
Large community support – Unlike Scala, Python has a huge, active community that can help you solve your queries.
Libraries, frameworks and packages – Python has a huge number of scientific packages, libraries and frameworks that help you work in any Hadoop or Spark environment.
Python compatibility with Hadoop – A package called Pydoop offers access to the HDFS API for Hadoop, which allows you to write Hadoop MapReduce programs and applications.
Hadoop is built on Java (as are some big-data technologies that are not Hadoop-based, such as the Elasticsearch engine, even though it serves JSON over REST).
Spark is written in Scala, although PySpark (the combination of Python and Spark) has gained a lot of momentum lately.
If you are planning to work as a Hadoop data analyst, Python is preferable, given that it has many libraries for advanced analytics; you can also use Spark for advanced analytics and implement machine learning techniques through the PySpark API.
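As a small, hedged illustration of what PySpark-based analysis looks like (the HDFS path and column names below are placeholders for your own data):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Start a Spark session; on a cluster this runs against YARN or another manager.
spark = SparkSession.builder.appName("example").getOrCreate()

# Read a CSV from HDFS and run a simple aggregation with the DataFrame API.
df = spark.read.csv("hdfs:///data/sales.csv", header=True, inferSchema=True)
df.groupBy("region").agg(F.sum("revenue").alias("total_revenue")).show()

spark.stop()
```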
The key-value pair is the record entity that a MapReduce job receives for execution. In the MapReduce process, before the data is passed to the mapper, it must first be converted into key-value pairs, because the mapper only understands data in key-value form. In Hadoop MapReduce, these pairs are produced by the InputFormat: the input is divided into InputSplits, and a RecordReader converts each split into the key-value pairs that are fed to the mapper.