Below is an aggregated list of some coding anti-patterns that can easily slip through code reviews.
- Comments: We all want to write meaningful comments to explain our code, but what if someone writes four paragraphs of comments explaining exactly what a piece of code does? That will have no problem passing code review, but it creates frustration for the developers who maintain the code: every time I need to change that piece of code, I have to go through the four paragraphs and maybe rewrite the whole thing, so screw it, I'm not touching that code.
- SRP: We want our code to respect the Single Responsibility Principle, and we want developers to write small units of logic that are easily testable, but what happens when you write too many units? This will have no problem passing code review, and if someone asks, you can just say you wrote the code to be easily testable. But once you go past a certain threshold, it becomes frustrating to jump between 20 methods in 10 classes just to do a simple task. It becomes the real spaghetti code.
SRP is a principle, not a pattern. From my experience, DRY should guide one to OCP and OCP to SRP.
The acronyms are explained in write-ups on the SOLID principles (plus DRY, YAGNI, KISS, and other such acronyms).
- Indifferent Architecture: You like a framework, so you use it for your next project and you don't think much about it. You put all the controllers in the Controllers folder, all the services in the Services folder, all the helpers in the Helpers folder, and because frameworks (Rails, Laravel, etc.) operate with a certain level of magic, the simple act of putting your model in the Models folder gives you a level of assistance that you will love. This will have no problem slipping through code review because, guess what, you're following the framework's guidelines. But fast forward a few months and you end up with the monolith that we all love to hate, and then your developers start hating on monoliths and want to go microservices. The real issue is not the monolith; the real issue was the lack of design and architecture.
The biggest anti-pattern that will slip through code reviews very easily is the singleton pattern. It is an anti-pattern for two reasons:
- What is unique today may be duplicated tomorrow: the classic case is that 20 years ago we had one screen per workstation; today two, three, or even four screens are increasingly common. If your development environment uses a singleton for the screen, now you are in trouble!
- Even if you really have just one (say, a configuration file), the implementations flying around are absolutely horrific 99.99% of the time
Right, so why is the mainstream implementation horrific? Here is what people generally do: because the pattern says there must be only one instance of a class, they hide the constructor and instead expose a static method called "getInstance" (or something similar) to create the instance and reuse it across the board.
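A minimal sketch of that common shape, assuming Java and a hypothetical AppConfig class (not any particular library's code):

```java
// Hypothetical example of the usual getInstance-style singleton described above.
// Every caller reaches for AppConfig.getInstance() directly, which is exactly the
// hidden dependency criticised below.
public final class AppConfig {

    private static AppConfig instance;

    private AppConfig() {
        // constructor hidden so nobody can create a second instance
    }

    public static synchronized AppConfig getInstance() {
        if (instance == null) {
            instance = new AppConfig();
        }
        return instance;
    }

    public String get(String key) {
        return System.getProperty(key, "");
    }
}
```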
That is the wrong way to go about it. What you should be doing instead is this (a minimal sketch follows the list):
- Make the entire singleton class private
- Have a normally allocatable class made public
- In the public class’ implementation (which has to reside in the same file) create the private class as required (maybe as a static field! That is completely fine)
- Use the public class
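A minimal sketch of those steps, again assuming Java and hypothetical names (in Java, "private" here means a private nested class; in other languages it could be a file-local class):

```java
// Hypothetical sketch of the approach above: the singleton itself is private and
// lives in the same file; callers only ever see the normally constructible
// Configuration class, which they can subclass or mock in tests.
public class Configuration {

    // Step 1: the actual singleton class is private to this file.
    private static class ConfigStore {
        private final java.util.Properties props = new java.util.Properties();

        String get(String key) {
            return props.getProperty(key, "");
        }
    }

    // Step 3: created once, as a static field of the public class.
    private static final ConfigStore STORE = new ConfigStore();

    // Steps 2 and 4: the public, normally allocatable class that everyone uses.
    public String lookup(String key) {
        return STORE.get(key);
    }
}
```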
This is how you should do a singleton, but that is not what you see around. The net result of the common implementation is a hidden dependency on the singleton, which then means a lot of stuff cannot be tested properly without bringing the singleton in (so you can’t, for example, easily mock it).
Please stop doing singletons or, if you can’t, please do get them right.
Code reviews are really important. However, without a good set of coding standards, they can often become “this is my preference”.
Here’s my suggestion on how to avoid anti-patterns slipping through code reviews:
- Read through Martin Fowler’s book “Refactoring”.
- As a team, figure out what people think are anti-patterns.
- Agree on a list. Define these anti-patterns in your coding standards.
- Make sure everyone reads the coding standards, and can access it easily.
- Then, you have given one another permission to call each other out when that class gets too large, or the method gets too long, or the method has too many parameters.
Bad Code:
- Lots of comments
- Meaningless names
- Long methods
- Methods that do many things
- Code that is hard to write unit tests for
- Code that doesn't have unit tests
- Code that is tightly coupled to other code
- Code that isn't S.O.L.I.D.
- Clever code
- Unreadable code.
Good Code:
- Code that makes sense to another programmer, or to your future self in 6 months.
- Statements, methods, and objects that each have a single responsibility.

The bad code has correct logic, but without comments leaves the reader guessing at the meaning.
The good code not only clearly documents what is being done, but gives a good idea of why.
Now how would one refactor these snippets to add a remote location bonus?
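The snippets being compared are not reproduced here, so purely as a hypothetical stand-in for the idea (intent-revealing names, each rule in its own small method, the remote-location bonus isolated so it is easy to add), something in this spirit:

```java
// Hypothetical illustration only, not the original snippets: the pay rules are
// named, small, and separated, so adding the remote-location bonus touches one place.
public class PayCalculator {

    // Assumed rate, for illustration only.
    private static final double REMOTE_LOCATION_BONUS_RATE = 0.10;

    public double grossPay(double basePay, boolean worksAtRemoteLocation) {
        double pay = basePay;
        if (worksAtRemoteLocation) {
            pay += remoteLocationBonus(basePay);
        }
        return pay;
    }

    // Each rule lives in its own small, testable method and documents the "why".
    private double remoteLocationBonus(double basePay) {
        return basePay * REMOTE_LOCATION_BONUS_RATE;
    }
}
```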
If you really want to know how to write good code check out “Clean Code: …” by Robert “Uncle Bob” Martin.
What’s the coolest coding pattern you’ve seen?
“Early exit” — the coolest and simplest thing.

The idea is to exit the code block as soon as you can. A few bonuses arise from this pattern:
- Your code is likely more focused on the purpose of the block, and better at avoiding a "run-on sentence" style of programming.
- Reduced nesting. The exact same logic could be written with the complicated code inside a nested bracket behind a condition, but early exits keep your more complicated code at the tail end of the function instead of nested near the top.
- Helpful for reinforcing the fact that validation and parameter checking should be done first. You get used to it, and functions start to look weird if they don't validate input parameters.
- Much easier for others to debug your code. Most of the validation is near the top, and less mental effort is needed because the code is a bit more readable.
Personally, I really like how it makes my code look like block paragraphs. It makes it easy to skim and read quickly.
From a distance you can see how it forms blocky paragraphs.
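A minimal sketch of that shape, assuming Java and a hypothetical method:

```java
import java.util.List;

public class OrderStats {

    // Early-exit style: validation first, each failure returns or throws immediately,
    // and the "real" work sits un-nested at the bottom of the method.
    public static double averageOrderValue(List<Double> orders) {
        if (orders == null) {
            throw new IllegalArgumentException("orders must not be null");
        }
        if (orders.isEmpty()) {
            return 0.0; // nothing to average, so exit before the loop below
        }

        double total = 0.0;
        for (double value : orders) {
            total += value;
        }
        return total / orders.size();
    }
}
```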
What are some popular programming anti-patterns?
- One function/file to rule them all. This is common in C/C++ for programmers who are still in the early stages of learning how to organize code. They will start filling either a single function (e.g. “main”) or at least a single file with their entire project’s code. This is not a bad way to start a project; I still do this myself. The problem comes when the programmer fails to realize that the code is becoming too large for the most basic organization strategy and keeps filling one container with all their code.
- Too classy: Every single object gets its own class, with constructors and methods for things which will never actually be needed (a small sketch of this smell follows the list). This is a textbook example of a programmer who has read a textbook on OOP but hasn't been shown what good OOP code looks like.
- The god object: There is one class with one instantiation which has its fingers in every single part of the program. It manages memory, maintains logs, synchronizes threads, and sends the manager his TPS reports for the day. Basically this is an OOP version of example 1 above, but is something you might still see in poorly maintained code.
- Balkanization: The number of classes, files, and folders in your project is directly proportional to the number of developers, specifically because they do not cooperate on the same code and have balkanized the code base into a piece for each developer. This is a behavioral sink for software development in response to poor job security. What better way to secure your position in a company than for you to be the only person in the entire company who understands your code, and what better way to be the only person who understands your code than to be the only person who reads it?
- OOP: Object orientation is almost always a stopgap measure to stop bad programmers from doing too much damage to a large code base. Given competent programmers, functional/procedural generic programming with lean data types is more scalable than OOP for the vast majority of projects. This is well illustrated by many C++ projects, where template programming is the actual backbone of the project with classes serving as a light layer of icing on the cake.
- Fire and forget: How many times have you personally stumbled onto code that you yourself wrote not too long ago only to realize that you don’t understand how it works anymore? It happens to most programmers often enough that they resent having to edit old code. This can be remedied by explicitly writing down detailed documentation in the comments of your own code with the idea of communicating the actual purpose and design of your code for not just a stranger, but yourself in the future.
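A hypothetical sketch of the "too classy" smell from the list above: a trivial value wrapped in ceremony nobody will ever need.

```java
// Hypothetical example: a plain number dressed up with constructors, setters and
// "just in case" methods that no caller actually uses.
public class TemperatureReading {

    private double celsius;

    public TemperatureReading() {
    }

    public TemperatureReading(double celsius) {
        this.celsius = celsius;
    }

    public double getCelsius() {
        return celsius;
    }

    public void setCelsius(double celsius) {
        this.celsius = celsius;
    }

    // Speculative API surface: never needed, but "might be useful one day".
    public TemperatureReading copy() {
        return new TemperatureReading(celsius);
    }

    @Override
    public String toString() {
        return "TemperatureReading{" + celsius + "}";
    }
}
```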
What are some best practices for code review?
SmartBear Software published a short white paper with 11 good practices for an effective code review process:
- Review fewer than 200-400 lines of code (LOC) at a time: more than 400 LOC demands more time and demoralises the reviewer, who knows beforehand that the task will take an enormous amount of time.
- Aim for an inspection rate of less than 300-500 LOC/hour: it is preferable to review fewer LOC but to look for things such as bugs, possible security holes, possible optimisation failures, and even possible design or architecture flaws.
- Take enough time for a proper, slow review, but not more than 60-90 minutes: as it is a task that requires attention to detail, the ability to concentrate decreases drastically the longer the task takes. From personal experience, after 60 minutes of effective code review, either you take a break (go for a coffee, get up from the chair and stretch, read an article, etc.) or you start being complacent about sensitive matters such as security, optimisation, and scalability.
- Authors should annotate source code before the review begins: It is important for the author to inform colleagues which files should be reviewed, preventing previously reviewed code from being validated again.
- Establish quantifiable goals for code review and capture metrics so you can improve your processes: it is important that the management team has a way of quantifying whether the code review process is effective, such as accounting for the number of bugs reported by the client.
- Checklists substantially improve results for both authors and reviewers: what should be reviewed? Without a list, each engineer will look for something in particular and forget other important points.
- Verify that defects are actually fixed! It isn't enough for a reviewer to indicate where the faults are or to suggest improvements. And it's not a matter of not trusting colleagues: it's important to validate that the changes were, in fact, well implemented.
- Managers must foster a good code review culture in which finding defects is viewed positively. It is necessary to avoid the culture of "why didn't you write it right the first time?". It's important that zero bugs are found in production; the development and review stage is where they should be found. It is important to have room for an engineer to make a mistake. Only then can you learn something new.
- Beware the "Big Brother" effect: similar to the previous point, but from the engineer's perspective. It is important to be aware that the suggestions and bugs reported in code reviews are quantifiable. This data should help managers see whether the process is working or whether an engineer is in particular difficulty, but it should never be used for performance evaluations.
- The Ego Effect: do at least some code review, even if you don't have time to review it all. Knowing that our code will be peer reviewed makes us more careful about what we write.
- Lightweight-style code reviews are efficient, practical, and effective at finding bugs: it's not necessary to follow the heavyweight procedure described by IBM 30 years ago, where 5-10 people would shut themselves away in periodic meetings with code printouts and scrutinise each line of code. Using tools like Git, you can participate in the code review process, write and associate comments with specific lines, discuss solutions through asynchronous messages with the author, and so on.
Source: Quora
What are the best code review tools?
This is a somewhat longer answer to the question – tool recommendations are at the end.
During the last 6-7 years I’ve evaluated various code review tools, including:
- Atlassian Crucible (SVN, CVS and Perforce)
- Google Gerrit (for Git)
- Facebook Phabricator Differential (Git, Hg, SVN)
- SmartBear Code Collaborator (supports pretty much anything)
- Bitbucket code comments
- Github code comments
At some point I’ve also just manually reviewed patches which were e-mailed after each commit/push.
I’ve tried many variations of the code review process:
- pre-commit vs. post-commit
- collecting various metrics & continuously trying to optimize the process vs. keeping it as simple as possible
- making code review required for every line vs. letting developers decide what to review
- using checklists vs. relying on developers’ experience-based intuition
Based on my experience with the code review process itself and the tools mentioned above, within the context of a small software company, I would make the following three points about code reviews:
- Code reviews are very useful and should be conducted even in software which may not be very “mission critical”. The list of benefits is too long to discuss here in detail, but short version: supplementing testing/QA by ensuring quality and reducing rework, sharing knowledge about code, architecture and best practices, ensuring consistency, increasing “bus count”. It’s well worth the price of 10-20% of each developer’s time.
- Code reviews shouldn't require the use of a complex tool (some of which need maintenance of their own) or a time-consuming process. Preferably, no external tool at all.
- Code reviews should be a natural part of the development process for each and every feature.
Based on those points, I would recommend the following process & tools:
- Use Bitbucket or Github for your source control
- Use hgflow/gitflow (or similar) process for your product development
- The author creates a Pull Request for a feature branch when it's ready for review. The author describes the Pull Request to the reviewer either in PR comments (with prose, diagrams, etc.) or directly face-to-face.
- The reviewer reviews the Pull Request in Bitbucket/Github. A discussion can be had as Github/Bitbucket comments at the PR level, at the code level, face-to-face, or combining all of those.
- When the review is done, the feature branch is merged in.
- Every feature goes through the same process
So, my recommended tools are the same you should be using for your source code control:
- Bitbucket Pull Requests
- Github Pull Requests
- Atlassian Stash Pull Requests (if you need to keep the code in-house)
What are some checks you always do on your code before you submit it for code review?
- Unit test coverage is above the minimum threshold
- Naming conventions are consistent with the rest of the codebase
- No duplication of functionality
- Properly linted/formatted code
Code Review Checklist:
- Logic: Is your logic correct according to the use cases?
- Performance: Is there a better approach or algorithm to solve the use case?
- Testing: Have unit tests[3] been written? Do they cover all the scenarios and edge cases? Have manual feature tests / integration tests[4] been performed? (I usually don't require integration tests to be written at the time of code review; I think it's quite early. I am fine if the changes have been tested in a local stack.)
- SOR: I call this separation of responsibility. Is there the necessary control abstraction[5] in your low-level design? How modular is your codebase? Is there a DAO layer in front of the database? Is there a client layer? Is there a manager layer? How have you handled exceptions? Who takes care of logging? How generic can their methods be? What kind of methods should each layer expose, and what responsibility should it own? This is probably the best place to apply your knowledge of Design Patterns[6]. This component also decides how generic[7], scalable[8], and extensible[9] your system can be. (A small layering sketch follows this list.)
- Readability: Short and descriptive variable/method names. Standard verbiage without grammatical mistakes. Method size kept small. A consistent naming convention throughout the package, be it camel case[10] or snake case[11]. Consistent naming of variables: do not refer to the same entity differently in different places in your code; avoid unnecessary confusion. Define the scope[12] of every class/method/variable and, when adding a new class or method, think about who is going to use it and who is not.
- Automation: If the same few lines of code appear in multiple places, move them to a method or utility. Avoid redundancy. Make the best use of reusability[13].
- Documentation: Draft the HLD/LLD on a wiki or in a document. The key design decisions, the proofs of concept[14], and the reviews/suggestions from senior developers should be consolidated in one single place. Although this point is not relevant for all code reviews, for the key implementation reviews it serves as a recipe for the reviewer. Apart from these high-level docs, make sure you have javadocs/scaladocs[15] for all the public methods. Avoid comments as much as possible; make your code self-explanatory.
- Best Practices: Read the manuals, articles, and (in very few scenarios) research papers of the frameworks you consume. Be an ardent visitor of Stack Overflow[16], check for the best ways to implement a complex use case, and see whether the code abides by them.
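A minimal, hypothetical sketch of the kind of layering the SOR point asks about, assuming Java and invented names: a DAO interface in front of the database and a manager that owns the business rule, each with a single responsibility and each mockable on its own.

```java
import java.util.Optional;

// Data-access layer: knows only how to fetch data, nothing about business rules.
interface UserDao {
    Optional<String> findEmailById(long userId);
}

// Manager layer: owns the business rule and receives the DAO by injection,
// so a review (or a unit test) can treat the two responsibilities separately.
class UserManager {

    private final UserDao dao;

    UserManager(UserDao dao) {
        this.dao = dao;
    }

    String emailOrDefault(long userId) {
        return dao.findEmailById(userId).orElse("unknown@example.com");
    }
}
```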
I spend quite a bit of time reviewing code, and some of the common problems I find are:
- Over-architecture: creating lots of superficial interfaces
- Premature optimization of code
- Reinventing the wheel when something similar already exists in open source or inside the codebase
- Coming up with a totally new pattern for doing things when the problem is already solved in the code
- Trying hard to fit a design pattern into code where it's not needed (just because you read about it a few days back)
- Very long variable names
- Typos in variable names
- No comments (I am OK with this if the code reads like a book, but sometimes you are writing something complex, like an algorithm, that won't make sense to a newcomer, and leaving a one-liner comment about your decision process would help people understand why you are doing it).
- Lack of enough tests in new code.
- No tests, or borderline tests, when mutating legacy code, and no effort to make the legacy code better (a minimal characterization-test sketch follows this list).
- Wrong technology choice
- Introducing a single point of failure (SPOF) in the architecture
- Typical database schema issues:
- Missing indexes
- Typos, using Java conventions for DB field names, or conventions mismatched with existing field names
- Very long column names
- Wrong datatypes, like strings for dates or varchar(1) for booleans
- Field lengths that are too large or too limited
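On the missing-tests points above, a minimal, hypothetical characterization-test sketch (JUnit 4 assumed): before mutating legacy code, pin down what it currently does so the change cannot silently alter behaviour.

```java
import org.junit.Test;
import static org.junit.Assert.assertEquals;

public class LegacyPriceFormatterTest {

    // Stand-in for the legacy code being touched; in real life this class already exists.
    static class LegacyPriceFormatter {
        static String format(int dollars) {
            return "$" + dollars + ".00";
        }
    }

    @Test
    public void formatsWholeDollarsTheWayItAlwaysHas() {
        // Capture the current behaviour first, then refactor against this safety net.
        assertEquals("$5.00", LegacyPriceFormatter.format(5));
    }
}
```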
Where can I have my code reviewed?
Since you're looking to have your whole project reviewed, Stack Overflow, the Code Review Stack Exchange, and programming subreddits won't work.
Here are some options that will help a non-technical person such as yourself:
Freelancers and Agencies
Consider hiring a more experienced freelancer or agency to review your outsourced team’s code. You might even be able to hire a local software developer to review their work.
- UpWork, Freelancer, Fiverr, Toptal, Codementor, etc. – With rates for code review as cheap as $10/hour, there’s a range of quality.
- Development Agencies – There are thousands of software development agencies around the world that offer code review. Similar to hiring freelancers, they start at around $10/hour. See this Quora question for tips for choosing a software development company. Be sure to read through the checklist for vetting and hiring them.
On-demand Code Review
If you want a professional option then look at PullRequest.com. It’s a platform for on-demand code review that works with GitHub, Bitbucket, or GitLab to provide code quality feedback from vetted reviewers. They can review your project for bugs, security issues, code maintainability, and code quality issues.
Algorithm and Tricks to save up to 30 cents per litre on Gas in USA and Canada
Gas is getting very expensive, so we are trying to help consumers save by providing daily tricks that can save you up to 30 cents per litre on gas in the USA and Canada.
1- Go shop for Food at Safeway and get an automatic 15 cents per litre discount at Safeway Fueling stations
2- To get 30 cents discount at Safeway Fuel stations, use the code below based on Epoch:
[Day]-800-[random 5digits]
Example:
- Today is June 27 2022, so the Day is: 179
- A random 5 digits is 35364 (Change the 5 digits if it doesn’t work. )
- So a Coupon to save 30 cents per litre at Safeway Gas Station on June 26, 2022 is:
- 179-800-35364 (Remember to change the random 5 digits until it works)
3. Purchase Discount Gift Cards for Gas
Rewards card – Cashback
You can find a great many discount gift cards for gas online. These will work at most Shell, Gulf, and Mobil stations. They will only save a few dollars per purchase, but that can add up to big savings on a yearly basis.
The Optimum program is one of the better value points programs. And the points convert to cash discounts on stuff you buy every day, rather than air travel and catalogues full of slightly aged-out consumer trinkets that you don’t really need.
If you are a Costco member and also optimum member, which option gives you the most savings?
From a quick Google of prices in my area, it looks like the average price is around $2/L and Costco is currently around $1.75. The value of the Optimum program is more that you can keep your eye out for specials and earn points which can then be put toward gas purchases. But the basic earnings of 10 pts/litre (1¢ equivalent) and redemptions of up to 4,000 pts ($4 equivalent) aren't anywhere near 25¢/litre. If you don't mind the lines 😉
If you have one near you, try to fuel up at Mobil instead of Esso. Esso gives 15 points per litre, Mobil gives 35 points per litre.
I used to have a work vehicle that I filled with Mobil gas on the company credit card, and I got roughly $30 of free groceries from Loblaws every week because of this practice.
Which card gives 10% cash back at the moment?
TD, CIBC, and Scotia all have one right now. It's 10% cash back on purchases up to $2,000 in the first three months.
I use the CIBC Dividend card: not only do I save on gas ($0.03 off a litre until you reach 300 L, then $0.10 off one time, and then it resets), but I earn cash back everywhere. Last year I earned about $580 cash back; this year I'm over $200 so far.
I bank with CIBC, and as I use my card I pay it off the same day, so I've never paid interest.
Note that your max yearly cash back for the 4% (gas and groceries), 2% and 1.5% categories is $800 (4% of $20,000). After $20,000 yearly spend, the 4% cash back ends, and is replaced with 0.5% on all purchases. In other words, if you spend on any of the other categories, you won’t get the $800, because you’ll hit $20,000 total spend before you hit $20,000 on gas and groceries.
I got a Rogers World Elite card and use it for all purchases except gas and groceries, for 1.5% cash back. I use the CIBC Dividend card only for gas and groceries, for 4% cash back.
CAA members save 3 cents per litre at all Shell stations. And Shell uses Air Miles.
4. Drive Sensibly
Rapid acceleration and short bursts of speed can cost you a lot when it comes to gas. Slow and steady driving is always preferable to erratic driving. Even Land Rovers, for example, can get better mileage by using cruise control. Practice smooth driving and you'll definitely save some money through improved gas mileage.
5. Time Your Trips to the Gas Station
Gas prices tend to rise on Thursdays because of the high odds of weekend travel. To avoid these increased prices, fill up the tank before Thursday or ahead of major holidays.
6. Utilize Your Smartphone to Find the Cheapest Gas Station
Your cell phone is for more than browsing Facebook and Instagram. Use it to locate the cheapest gas in your area. Apps like AAA TripTik and GasBuddy will help you find the closest and cheapest fuel.
Something I’ve noticed with the gas saving apps… many times the prices are wrong. I show up at a station, and end up refueling anyway, and then a few minutes later I see it has been put back to the “fake low price”.
I think owners are gaming the system in order to draw people in.
7. Get a Gas Rewards Card
Too few people have a gas rewards card. It's like not joining a rewards plan even though you're a long-standing customer. There are a great many sites out there that can introduce you to deals on fuel rewards. You can get free gas if you collect enough points, so why not? Sign up for that rewards card!
8. Try not to Leave Your Engine Idling for Very Long
Shut off your engine if you're not going anywhere. You're wasting gas, and you're polluting the environment.
9. Deliberately Use Cards or Cash
Some service stations charge a premium if you pay with credit cards, but some give you discounts for using them. Find out which is which and pay however saves you money.
10. Maintain Your Car
Keeping your vehicle well maintained is how you save money on gas over the long haul. If you have a clunker or a vehicle that you treat badly, it will get bad mileage. Simply keeping your tires inflated can improve your gas mileage by 3.3%. So stay on top of your maintenance.
11. Be Picky
Stop going to the gas station nearest your home or the interstate just to get it over with. This can cost you almost 15 cents more per gallon. Find a gas station with cheap prices and stick with it.
11. Try not to Overload Your Car
This is a no-brainer, but it bears repeating. If you're hauling your entire life around in your vehicle, stop doing it. Obviously, the heavier your vehicle gets, the more gas it needs to cover the same distance. Keep only the bare necessities in your vehicle and leave the rest at home.
This app gets you 40 cents per gallon cash back at several gas stations. Average users are getting paid hundreds, and professional drivers are getting thousands, with an app that gives you money back on each gallon of gas!
12. Drive more slowly, think ahead, and use engine braking.
The amount of time you gain by speeding is tiny compared to the amount of fuel you will save by slowing down.
13. Plan grocery trips over longer intervals. Instead of going a few times a week to pick up a couple of things, go once every 2-3 weeks with a list of everything you'll need for that timeframe.
14. Drive the smallest stick-shift diesel available. Press in your clutch on downhills, especially long ones on the freeway. Play a game where you try to use as little throttle as possible.
15. Buy a more fuel efficient car. That makes the biggest difference.
16. Drive less. Combine trips. Carpool. Walk. Bicycle. Take public transit.
Do things (including many types of work) that can be done over a wire, over that wire, instead of driving to them. Drive a more fuel-efficient vehicle. If people bothered to think about when these options might be possible, they would find that they generally are.
16. Limit discretionary driving.
I have a gas-powered SUV and paid nearly $60 to fill its tank last week. I no longer drive around town just for the hell of it—I have to be strategic. Instead of driving to Target or Walmart for household goods and groceries, I order these necessities for delivery via Amazon. If I do need to drive to one part of town, I hit all the shops in that area at once and act as if I won’t be back for weeks. Ultimately, I am driving with intent—every trip has a purpose.
17. Tyres
Find the Tyre pressure placard in your car and make sure your tyres are pumped up to the correct pressure.
Try to do this when you have driven the car for less than 5 minutes: hot air expands and will give a false reading if the tyres are hot, so do it when they are cold. Do NOT pump them up to the max pressure listed on the side of the tyre.
Keeping your tire pressure right is not only a safety measure; it also saves fuel, as the right amount of tire pressure reduces rolling resistance with the road.
Tip: tire pressure checks are free at every petrol pump, but that doesn't mean they're worthless. Make use of them every time you can.
Actually, over-inflate your tires for best gas mileage.
The number on your door is the recommended pressure. The max pressure on the tire is the “do not exceed” number. Something in between is fine.
The drawback is that you’re going to wear out the middle of the tire quicker than the sides (because it’ll dome a bit from the higher pressure if you don’t have enough weight to force it flatter again). This might be noticeable after years.
But tires aren’t that expensive, and fuel is. You’ll pay off the small reduction in tire life with the bigger reduction in fuel use (and, especially if you’re in a pinch today, you could kind of consider it a deferred expense). And, it’s a small change you can always taper off again later.
A side effect will be a slightly harsher ride, and slightly less grip (not great for the winter).
Roughly speaking, 50% of your gas usage comes from rolling resistance in the tires and the other 50% from air resistance. At city speeds, tires and stops/starts make up most of your gas cost. Somewhere around two-thirds to three-quarters of highway speed is where air resistance takes over. Above 60 mph / 100 km/h is where you really start to gobble fuel disproportionately (the power needed to overcome drag grows roughly with the cube of speed, so going 10% faster takes about 33% more power).
Avoid situations where you have to use the brakes. Any time you use the brakes you're wasting all the energy you put into accelerating the vehicle. In stop-and-go traffic, this is most of your fuel use. So instead of racing forward to fill gaps and then having to stop, just drive at half the speed, steadily. If you see the light is red, get off the gas and coast; don't accelerate up to it and then hit the brakes. Be careful you're not blocking turning lanes by driving slower; just because you're stopping at the lights doesn't mean everyone behind you is.
In short… there’s no free lunch here. If there were ways to save money on gas, those would already be things we’re doing. All the little tips and tricks might add up to 20%, which is like… where gas prices were a month ago.
The only easy way to save money on gas is to drive less.
18. Lose weight.
Get rid of any excess stuff you have in your car. Every extra kilo costs money to haul around. The same goes for aerodynamics: those roof racks you never use? Take them off!
19. Change your driving style.
So many people these days drive aggressively. Stamping your foot to the floor whenever you accelerate is unnecessary and burns far more fuel than using 50 or 75% throttle. There are other throttle positions than 100%!
Instead of speeding up to close any gap in front of you, leave it there and coast a bit. Someone may change lanes into it; who cares? Watch ahead: if cars start braking up ahead, take your foot off the throttle early and coast a bit instead of riding the car in front of you, constantly braking and accelerating.
20. Drive smoothly. It's amazing how big a difference driving style makes to fuel consumption.
21. Engine Air Filter
Make sure the engine air filter is clean; dirty air filters make for poor fuel consumption.
22. Premium Fuels
Only go for premium fuel if the car manufacturer recommends it. Otherwise, you are just increasing the cost of fuel and the overall running cost of your car. It's a myth that premium fuel will help you save fuel and increase your car's mileage.
Tip: buy regular fuel; premium fuel simply costs more without saving you any fuel.
23. Cruise Control
Using cruise control on the highway provides a smooth ride with gentle, steady throttle. Ultimately it will add to your mileage and save you a lot of fuel.
24. Gas Pedal Control
If you keep a soft foot on the pedal you will always save a lot of fuel. With a hard foot, the car consumes the maximum amount of fuel needed to generate the power we demand.
Tip: after reaching a speed of 70-80, try easing off, holding the gas pedal at the fixed position where acceleration is almost zero.
25. Keep RPM Low
Higher RPM means higher fuel consumption, while lower RPM saves fuel and gives every passenger in the car a safer feeling.
Tip: remember that driving fast, keeping your speed and RPM high, makes very little difference to your travel time; with the traffic on the roads these days, you won't save more than 5 minutes. Keep it low to save fuel.
26. Save Fuel by Driving Smart
Driving consciously and safely will always help maintain a car's mileage and save fuel. Avoiding unnecessary fast pickups and jackrabbit stops will always help in saving fuel.
Tip: easy, safe driving saves fuel and keeps you safer.
27. Overlooked button on your car may help save on gas
The ‘Air Recirculating’ button on your A/C might cool off your car faster and save you a little gas. On most cars, trucks, and SUVs the air recirculation button is easily identifiable, with its representing symbol of a half-circle inside of the outline of a vehicle. Many people say they’re aware of the button, but are not sure when it should be on or off.
Another function of this climate control system is to stop pollution and exhaust fumes from entering the vehicle. Having this button activated will also help to greatly reduce pollen when driving, which is a big positive if you suffer from outdoor allergens.
“If you don’t switch the air recirculation button on, then your car’s air conditioning will be constantly cooling warm air from outside your vehicle, and will have to work much harder, putting more stress on the blower and air compressor,” said Ruhl.
Another benefit to using the air recirculation feature is the money you could save on gas.
“Cars are usually more fuel-efficient when the air conditioner is set to recirculate interior air. This is because keeping the same air cool takes less energy than continuously cooling hot air from outside,” said Ruhl.
While the recirculation button is great for the summer months, it may be best to avoid it in the winter or when your windows become foggy.
“Anytime you’re using defrost, it’s best to not have that button on. Also, using it while you have your heater on isn’t going to do anything for your vehicle,” said Ruhl.
28. Your driving habits are a huge factor. Very slow accelerations and decelerations help dramatically. Coasting to that upcoming red light instead of keeping on the gas and braking. Chilling at 60 on cruise in the right lane vs accelerating between 65 and 75 passing people in the left. Things like that.
Also, for most cars, above 55 mph it's better to keep your windows up and use the AC; below 55, it's better to roll the windows down and turn the AC off. It varies by model due to aerodynamics, but 55 is good enough to give you an idea.
29. Don’t hard accelerate
Try to slow down in a more gentle manner; if you're lucky, the light will go green before you stop.
Be consistent with your speed: if it's a 30 mph zone, try not to go faster than that, or to get distracted to the point where your car starts slowing down.
If it's hot out, keep the windows down; AC in older cars can make the car consume more gas. I'm not sure how the newer cars are doing with that.
Make sure your tires have good tread; bald tires can spin out more, and uneven wear can cause additional issues.
30. If you drive an SUV, trade it for a Toyota Corolla.
Scientifically proven that the wavelength of reflections on the beige tone is in the optimal bandwidth to reduce optical resistance, thus better fuel efficiency.
Check your engine air filter. Make sure it is clean, replace if necessary. Make sure your tires are filled to the recommended pressure.
Also change spark plugs at their recommended service life.
Also, if your car is over 160k km, it's a good idea to replace the O2 sensors, as they get slow. I replaced all four sensors in my car and my mileage went from 9.x L/100 km to the high 7s.
What kind of car should you buy that saves on gas?
A Prius, or any type of gas/electric hybrid, or a smaller vehicle, like a Toyota Corolla, Honda Civic, Chevy Malibu, Ford Focus, VW GTI or Rabbit.
But there is a direct correlation between How you drive, regardless of What you drive. I have a 1998 Chevy Silverado, with a 5.7L (350 cu in) V8, and I can get great MPG’s when I drive it sensibly, and don’t have a ton of unnecessary stuff/gear in the back, or even back seat.
Make sure the tires are set to the appropriate PSI. Always set them to the pressure listed on the inside of the driver's door. On that subject, changing the tire size, wheel size, or sidewall thickness will also have a negative effect on MPG.
You would be surprised how much stuff a lot of people have laying in the back of their car, and if they would simply clean it out, they could save money.
Also, keeping your vehicle tuned up and the oil changed per the owners manual will also help keep the MPG high.
Not speeding away from every stop sign or stop light will also help.
Keeping your speed down on the freeway will help.
However, opting to roll the windows down instead of using the A/C to keep cool will actually create drag on the car and lower the efficiency, so crank that A/C up to high. Not only will keeping the windows up save fuel, it will also reduce noise and fatigue, so you can drive more comfortably.
What burns more gas, accelerating as fast as possible to 60 mph (e.g. 10 seconds) or accelerating slowly (e.g. 30 seconds)?
Not long ago I had a ’16 Subaru WRX. Fast, turbo-charged all-wheel-drive car. Terrible gas mileage. It’s also heavy, roughly two tons.
One day, I did an experiment on the city streets. Rather than accelerate in a controlled manner and drive at a consistent pace, I put the gas pedal all the way down to reach about 15 mph over the speed limit, and then I put the car in neutral, and let it coast. The car would coast a full mile before it was going slow enough (5 to 10 mph below the speed limit) that I had to put it in gear and goose the throttle again full blast and bring it up to 15 mph over the speed limit.
In this simple test, the overall gas mileage skyrocketed. It went from about 25 mpg to more like 40 mpg. And yet I was ultimately going the speed limit on average, and kicking off my trips very quickly.
This led me to a realization. Yes, holding that gas pedal all the way down uses up a lot of gas. But what it also does is important: it brings you up to speed. What also uses up a lot of gas is simply cruising—not coasting, cruising. That’s where most of your gas is being spent, because your engine is expending gas, quite a bit of it, actually, just to keep up and maintain velocity.
And when you accelerate slowly, you’re effectively cruising, without being up to speed, yet with a little extra gas. That’s wasteful, because you’re going slow and still using up plenty of gas. Is it more wasteful than the explosion of rushing your car forward immediately? Actually, perhaps so, if you’re taking too long to do it.
Remember, just keeping that engine turning uses up fuel. Accelerating quickly brings the car up to speed quickly, which brings the engine's output to its maximum quickly. That is not an infinite dump of fuel; it is limited to what the fuel line, injector, and cylinder can mix with air and compress, which is measurable, and it's actually not as far off from cruising fuel use as people seem to think. Source: Quora
TIPS ON PUMPING GAS THAT WILL SAVE YOU $$$
1️⃣ Only buy or fill up your car or truck in the early morning when the ground temperature is still cold. Remember that all service stations have their storage tanks buried below ground. The colder the ground, the denser the gasoline; when it gets warmer, gasoline expands, so if you buy in the afternoon or in the evening, your gallon is not exactly a gallon. In the petroleum business, the specific gravity and the temperature of gasoline, diesel, jet fuel, ethanol, and other petroleum products play an important role.
2️⃣ A 1-degree rise in temperature is a big deal for this business. But the service stations do not have temperature compensation at the pumps.
3️⃣ When you're filling up, do not squeeze the trigger of the nozzle to the fast setting. If you look, you will see that the trigger has three (3) stages: low, middle, and high. You should be pumping on the low setting, thereby minimizing the vapors created while you are pumping. All hoses at the pump have a vapor return. If you are pumping at the fast rate, some of the liquid that goes into your tank becomes vapor, and those vapors are sucked up and back into the underground storage tank, so you're getting less for your money.
4️⃣ One of the most important tips is to fill up when your gas tank is HALF FULL. The reason for this is that the more gas you have in your tank, the less air is occupying its empty space. Gasoline evaporates faster than you can imagine. Gasoline storage tanks have an internal floating roof. This roof provides zero clearance between the gas and the atmosphere, so it minimizes evaporation. Unlike service stations, here where I work every truck that we load is temperature compensated, so that every gallon is actually the exact amount.
5️⃣ Another reminder, if there is a gasoline truck pumping into the storage tanks when you stop to buy gas, DO NOT fill up; most likely the gasoline is being stirred up as the gas is being delivered, and you might pick up some of the dirt that normally settles on the bottom.
6️⃣ Note: If the pump repeatedly shuts off early, it could be a sign of a problem with the vapor recovery system, such as a clogged carbon canister.
How can You save gas when driving long distances?
1. First and foremost, maintain a steady speed.
2. Fill your tire pressure 1 or 2 psi more than the prescribed number.
3. Do not travel with your AC off, especially during a long-distance journey. With your AC off you will have to lower the car windows, and if you are traveling at speeds of more than 60 miles per hour this is going to affect the aerodynamics of the car, which might hurt fuel consumption a bit.
4. Remove all unnecessary weight from the car.
5. Choose a well maintained road even if it is going to take you more time than a bad road.
6. Have your car checked with a mechanic before you travel.
Do automobiles get better fuel mileage with the A.C. on and windows up, or A.C. off, and windows down?
Under 70 mph with your windows up, your AC will use more energy than if the windows were down and the AC off. As your cruising speed increases, the aerodynamic drag on the car increases to the point where having the windows down creates a greater load on the engine than the AC does. This only applies to modern cars, which are generally quite aerodynamic; having the windows up or down doesn't really make any difference to vintage cars. Remember though, AC takes more power than you might suppose, so on a long hot journey, driving with the AC off will improve mpg. Taking the AC equipment off altogether will make an even bigger difference – as much as 10%.
Does cruising in a car save on gas? How?
Since cruising involves maintaining the vehicle at a constant velocity, it requires minimum effort (power) from the engine.
The power required from the engine is only what is needed to counteract the deceleration caused by frictional forces (air drag and rolling resistance). Since less power is required from the engine, the ECU ensures minimum gas is used.
Can lowering your tailgate really save on gas?
No, it's a myth; in fact the now-cancelled show MythBusters did an episode on it. A pretty legit test, if I do say so, although if you have a truck with two gas tanks you could test it yourself, as I have. The one thing that can help seems counterintuitive, which is to add a little weight. Around 100 pounds or so, depending, and make sure it's over or behind the rear axle in the bed. What this does is give the rear wheels a bit more traction, and that increases your gas mileage a little. A trick I learned from my grandpa as a curious little kid wondering why he always had a couple of spares mounted on each side of the bed right up against the tailgate. Those old gas guzzlers needed all the efficiency they could get.
Bonus: it also works better in snow, ice, and slush. Get some sand bags and throw them in the same spot behind the axle and you limit fishtailing/sliding in the winter. More weight than the hundred pounds, plus it has multiple uses. If you get stuck where the tires are spinning on the ice, you can open up a sand bag and pour the sand in front of and behind the tire to help gain traction. Make sure to do both sides of the truck, as you probably won't have positraction. Lol… additionally, if it's not too cold, you can pee on the ice around the tire. I have gotten many a person unstuck with a little sand and piss.
Can I keep driving on eco mode? How much does it save on gas?
Economy mode is useful in most conditions, but be advised that some engines need to be "blown free" by using higher RPM and full engine load in order to keep the exhaust/turbo system unclogged. That applies especially to diesel engines with an EGR system. Driven only in "grandfather" mode, those engines will need an extensive overhaul well before reaching their estimated end of service life, which absolutely nullifies any eventual gains from eco mode.
What are some ways to save on gas annually?
If your question refers to the gasoline you burn to make your car run, then to save gas you should follow the instructions of your car's manufacturer. If your question refers to the natural gas that you use at home to heat food, water, etc., then the only recommendation is to watch for leaks if you suspect you are losing gas; having those leaks fixed by an experienced technician will resolve your problem. Coming back to your car: not speeding, and not letting the engine idle for a long time just to keep the air conditioner (or, in winter, the heater) running, are two important ways to reduce gasoline consumption.
Does getting a Tesla make financial sense in terms of cost savings on gas and maintenance?
With rising prices, what are smart ways to save money or good alternatives like horse and carriage to save on gas?
This is my plan for tackling the current inflationary environment in the United States:
- Limit discretionary driving. I have a gas-powered SUV and paid nearly $60 to fill its tank last week. I no longer drive around town just for the hell of it—I have to be strategic. Instead of driving to Target or Walmart for household goods and groceries, I order these necessities for delivery via Amazon. If I do need to drive to one part of town, I hit all the shops in that area at once and act as if I won’t be back for weeks. Ultimately, I am driving with intent—every trip has a purpose.
- Meal substitution. In my area of the U.S., beef is less expensive than chicken. Thus, I substitute beef for chicken and prepare meals like spaghetti, burgers, and chili. Also, my cost of groceries has risen faster than the cost of a Chipotle burrito, for instance, so I sometimes eat a Chipotle burrito instead of eating at home.
- Plan for higher utilities. My energy bill is much higher today than it was last year. Since I live in an apartment, each unit’s bill is decided by dividing the energy cost for the entire building by the number of occupied units. Thus, I have very little control over the cost of my monthly bill. I must prepare for this expense and not let it blindside me.
- Limit unnecessary consumption. Now is not the time to be frivolous with money. All nonessential consumption (i.e., online shoe shopping, going to the movies, etc.) is essentially placed on hold.
- Invest tactfully. With inflation running hot, the Federal Reserve likely hiking interest rates in the coming months, and macroeconomic and political uncertainty, the stock and crypto markets may fall further before rising once again. Having dry powder (i.e., cash) on hand to take advantage of the situation is not a bad idea. I’ve been building my cash position over the past couple of months, so I can buy assets when others are fearful and need/decide to sell. As a long-term investor, you want to buy into fear and weakness, and I believe we are in that environment.
How much money do you save on gas with a hybrid?
If you compare it against a small, light ICE vehicle, you won't save anything, but if you compare an ICE car of the same weight as an EV, then you will save money, possibly as much as $10 every 200 miles.
How much money do you save on gas by paying cash instead of credit in the long-term?
Using a 10 cent per gal difference between cash & cc, that comes to about $28 extra per year to use my credit card for my mileage and average MPG. That’s about $2.33/month so not much at all. Then you need to take into account that I get 3% back using my credit card at the pump from my credit card rewards program. That comes to $29/year. Those were round number calculations I did though so we’ll just call it even.
Does cruise control actually save gas or is that a myth?
The cruise control itself does not save any gas compared to simply keeping your foot at the same position. However, what cruise control does tend to do, is influence the driving style of the human inside.
The whole point of the cruise control is that you don’t need to constantly control the throttle. And thus you will tend to want to avoid needing to do that while using it. At the most, you will want to disengage the cruise control, to reduce speed slowly when needed, and then re-engage when you can overtake.
The result is that you tend to start looking further ahead, a few cars further than the one directly in front of you. Coming up on a car, you will decide earlier if you can overtake, or if you lift the throttle. This is very positive for reducing fuel consumption.
Many drivers without cruise control will not lift until the last moment, and then often need to brake when they can’t overtake. This is disastrous for the fuel consumption.
There are some special situations where cruise control itself can help reducing fuel consumption. One of those is when using the highest gear at very low throttle. This tends to be the most fuel-efficient configuration, but with so little torque, it can be difficult to keep the speed constant. The cruise control can do that very well. If you can’t manage to drive comfortably at that speed yourself, but the cruise control can, then that is a case where the cruise control directly allows higher fuel efficiency.
Another is when your car doesn’t have a mid-console near your foot, and thus is it difficult to lean your foot against it, helping keep a steady position. In that case, driving without cruise control might lead to constant speed changes as well, and the cruise control could help smooth that. That will also improve fuel efficiency slightly.
But in general, anything the cruise control does, you can do as well… It is the driving style that improves fuel efficiency. Cruise control can encourage a more relaxed driving style, and that helps. If you were already driving relaxed and smooth, then you'll not notice any difference.
By improving public roads in order to minimize rolling resistance and enhance traction, how much money could be saved on gas consumption and avoidance of traffic accidents?
If I drove 100 miles every day, how long would it take me to pay off my electric car with the money I save on gas?
What kind of car should I buy that saves on gas?
What’s the best car that will save on gas/maintain car value overtime?
Short answer: a Toyota Corolla or Honda Civic.
When I have little gas left in my car, is it better to drive fast or slow so that I can get the best distance out of the amount of gas left?
Look at all the other mileage techniques that other people have formulated over the years, they all apply. Basically:
- Accelerate firmly from a stop. Too slowly, and you waste time in low gears, which are inefficient. Too fast, and your engine burns more fuel than it needs to. 8-10 seconds to 40 mph is good; get a feel for your car, and maybe get an OBD reader to monitor fuel usage directly (any car from the late 1990s onward has a port for one, I think).
- Try to get to the top gear at the lowest RPM; the engine spins the slowest for maximum distance. A little slower is usually OK, especially if the car has a bad drag coefficient or there are a lot of stops. Accelerating up to top gear only to brake for a stop light is a waste of fuel.
- Modern cars cut fuel when engine braking. Try to roll as far and as long as possible without using the brakes, and avoid idling. Braking early and then rolling is better than coming to a complete stop, since idling is just a constant drain, and if the light goes green you keep your kinetic energy. You can usually feel when the ECU starts fuel delivery again because the engine braking lessens. Forcing downshifts is not recommended, though, due to:
- increased wear on the transmission, which is more expensive to replace than brakes
- the spurt of fuel needed to kick the RPMs up. It may still be worth it if you need every last drop; try downshifting early if so.
- Try not to use neutral when coasting, since the engine is still running. Also, it's generally illegal.
- Coast uphill, accelerate downhill (where possible). Don't roll down the hill backwards.
- If in a hybrid, try to coast at 0 throttle and 0 regen. Regen, while nice, is fundamentally inefficient due to multiple transformations of energy. At 0 throttle the engine is off, and no fuel is used. Hybrids generally have low drag, so they can go pretty far on flat ground.
- Tailgating can save some fuel, but it isn't really safe. A few car lengths of distance can still yield a bit; just don't overspeed to do so.
- Turn the engine off if you're going to be stopped for long periods of time.
Is driving slow up on a hill(consume less fuel but takes longer) or fast(consume more fuel but takes less time) better choice for fuel saving ? The hill would be 1 km for reference.
The answer comes down to matching the proper rev range to the power demand so the engine runs where it is most efficient.
The real-world answer is that if it’s just a kilometer, the difference is negligible.
Engines are usually most efficient somewhere between a third and half of the RPM range and at decent load. So if you would need to floor it to get up the hill in the current gear, downshift; otherwise just press the pedal slightly harder and keep your speed.
As long as you can engine brake downhill the speed doesn’t really matter, just keep the usual traffic speed.
In general accelerating just to slow down later is worse than just keeping steady pace, especially if there are brakes involved.
When accelerating in a car does it use more gasoline to accelerate rapidly as opposed to slowly?
That’s a good question, but not a simple one to answer.
A car is most efficient when in its highest gear. If you accelerate too slowly, you will spend too much time in the lower gears before you get into the highest gear. Therefore, accelerating excessively slowly is not the most economical technique, and the advice to accelerate slowly to save fuel is WRONG!
A few decades ago, BMW did some tests to determine the most economical way to drive their cars. Although that was before fuel injection became common, I’m sure that the rules have not changed very much. They found that for their cars, the most economical technique was to accelerate with a heavy foot (2/3 to 3/4 throttle) but upshift at only 2000 rpm. That works well for a manual transmission, but is generally impossible with an automatic transmission because it will upshift at a considerably higher speed if you use a heavy foot and, just as bad, delay locking the torque converter. So, with an automatic transmission, the most economical technique is probably to accelerate at a moderate rate, i.e., not too fast and not too slowly.
The rules may have changed slightly because of modern electronic fuel injection systems which control the fuel mixture better. They are less likely to deliver an excessively rich mixture at wide throttle openings which occur with a very heavy foot.
With an Otto-cycle engine (4-stroke, spark ignition), the throttle valve is an important source of inefficiency. The power required to suck in air against the vacuum created by the throttle valve wastes fuel. For that reason, an Otto-cycle engine is most efficient when the throttle valve is wide open, or nearly so, provided that the fuel system does not deliver an excessively rich mixture under those conditions. That’s why it is most efficient to use a heavy foot and upshift at low speeds, but not at such low speeds that the engine knocks or doesn’t run smoothly, since that could cause damage.
The most inefficient thing you can do is use a lower gear than necessary for the power you are using. So, if you delay upshifting until 3000 rpm when, with a heavier foot, you could get the same power at 2000 rpm, you are wasting fuel. So, for fuel efficiency, you should upshift at the lowest possible speed that will provide the power you need, but not at such a low speed that the engine protests.
In a vehicle with an automatic transmission, what burns more gas: accelerating as fast as possible to 60 mph, or accelerating slowly?
In simplistic physics terms, it makes no difference. You create the same amount of kinetic energy either way – and theoretically, that means you must burn the same amount of fuel.
For an internal combustion engine with gears it gets complicated.
A conventional car engine has a range of RPM’s at which the engine operates most efficiently. At lower or higher RPM’s gas consumption is worse.
So the trick is to keep the car in that band.
With a manual gearbox, the best approach is to push hard on the pedal to get the RPMs into the efficient range, then accelerate more smoothly to the top of that range, then upshift.
If your car has enough gears, you can arrange to stay in the efficient range for all but the initial acceleration in 1st gear.
However, with an automatic (and especially automatics with not many gears in their gearbox) – you have no direct control over that – so it becomes a matter of tricking the gearbox into doing what you want. With modern gearboxes, you’d hope that the manufacturer set the shift points for efficiency – but it depends on the car. For a sports car they probably optimized the shift pattern for best 0–60 time – so they’d keep the engine in the “power zone” of RPM’s rather than in the “efficiency zone”…for a family sedan, the reverse would be the case. Many cars have a “sport” button which essentially lets you choose between keeping the engine in the power band or the efficiency band.
But even on the “economy” setting, the software won’t be able to prevent you from demanding performance that drives it out of the economy range.
It also varies depending on the air temperature – when the air is cold, it’s more dense and the fuel management software can burn fuel in larger quantities than on hot days – and that may influence the decision.
There are other considerations too. If you accelerate and brake gently then it takes longer to get you where you’re going. This means that the air conditioner, radio, lights, computer(s), etc are running for longer…and that takes energy too.
On the other hand – if you continually red-line the engine, it’ll wear out faster and a worn out engine uses more gas than a good engine.
Honestly – the answer is horribly complicated – and it varies from car to car.
Sources:
1- Quora
2- Reddit
3- https://vehiclecare.in/blaze/how-to-save-fuel-13-fuel-saving-tips/
What is the tech stack behind Google Search Engine?
The original Google algorithm was called PageRank, named after inventor Larry Page (though, fittingly, the algorithm does rank web pages).
After 17 years of work by many software engineers, researchers, and statisticians, Google search uses algorithms upon algorithms upon algorithms.
- The various components used by Google Search are all proprietary, but most of the code is written in C++.
- Google Search has a number of technical explanations of how search works, and that is also the limit of what can be shared publicly.
- https://abseil.io and GoogleTest https://google.github.io/googletest/ are the main open-source Google C++ libraries; they are extensively used for Search.
- https://bazel.build is another open-source framework which is heavily used all across Google, including for Search.
- Google has general information on you, the kinds of things you might like, the sites you frequent, etc. When it fetches search results, they get ranked, and this personal info is used to adjust the rankings, resulting in different search results for each user.
How does Google’s indexing algorithm (so it can do things like fuzzy string matching) technically structure its index?
- There is no single technique that works.
- At a basic level, all search engines have something like an inverted index, so you can look up words and associated documents. There may also be a forward index.
- One way of constructing such an index is by stemming words. Stemming is done with an algorithm that boils words down to their basic root. The most famous stemming algorithm is the Porter stemmer.
- However, there are other approaches. One is to build n-grams, sequences of n letters, so that you can do partial matching. You would often choose multiple values of n, and thus have multiple indexes, since some n-letter combinations are common (e.g., “th”) for small n’s, but larger values of n undermine the intent. (A rough sketch of both ideas follows this list.)
- I don’t know that we can say “nothing absolute is known”. Look at misspellings. Google can resolve a lot of them. This isn’t surprising; we’ve had spellcheckers for at least 40 years. However, the less common a misspelling, the harder it is for Google to catch.
- One cool thing about Google is that they have been studying and collecting data on searches for more than 20 years. I don’t mean that they have been studying searching or search engines (although they have been), but that they have been studying how people search. They process several billion search queries each day. They have developed models of what people really want, which often isn’t what they say they want. That’s why they track every click you make on search results… well, that and the fact that they want to build effective models for ad placement.
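To make the stemming and n-gram ideas above concrete, here is a rough, self-contained Python sketch. It is only an illustration of the concepts, not how Google actually builds its index; the helper names (crude_stem, build_index) and the toy suffix-stripping rule are invented for this example, and a real system would use something like the Porter stemmer.

```python
# Toy illustration: an inverted index keyed both by crudely "stemmed"
# words and by character n-grams, so exact and partial matches can be
# looked up. Purely illustrative, not Google's actual structure.
from collections import defaultdict

def crude_stem(word):
    # Stand-in for a real stemmer such as the Porter stemmer.
    for suffix in ("ing", "ed", "es", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def ngrams(word, n=3):
    return {word[i:i + n] for i in range(len(word) - n + 1)} or {word}

def build_index(docs, n=3):
    word_index, gram_index = defaultdict(set), defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            word_index[crude_stem(word)].add(doc_id)
            for gram in ngrams(word, n):
                gram_index[gram].add(doc_id)
    return word_index, gram_index

docs = {1: "searching search engines", 2: "engine indexing basics"}
word_index, gram_index = build_index(docs)
print(word_index[crude_stem("searches")])  # {1} -- stem match
print(gram_index["ngi"])                   # {1, 2} -- partial match via n-grams
```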
Each year, Google changes its search algorithm around 500–600 times. While most of these changes are minor, Google occasionally rolls out a “major” algorithmic update (such as Google Panda and Google Penguin) that affects search results in significant ways.
For search marketers, knowing the dates of these Google updates can help explain changes in rankings and organic website traffic and ultimately improve search engine optimization. Below, we’ve listed the major algorithmic changes that have had the biggest impact on search.
Originally, Google’s indexing algorithm was fairly simple.
It took a starting page and added every unique word on the page to the index (if a word occurred more than once on the page, it was only counted once), or incremented the index count if the word was already in the index.
The page was indexed by the number of references the algorithm found to the specific page. So each time the system found a link to the page on a newly discovered page, the page count was incremented.
When you did a search, the system would identify all the pages with those words on it and show you the ones that had the most links to them.
As people searched and visited pages from the search results, Google would also track the pages that people would click to from the search page. Those that people clicked would also be identified as a better quality match for that set of search terms. If the person quickly came back to the search page and clicked another link, the match quality would be reduced.
Now, Google is using natural language processing, a method of trying to guess what the user really wants. From that, it finds similar words that might give a better set of results, based on searches done by millions of other people like you. It might assume that you really meant this other word instead of the word you used in your search terms, or it might just give you matches in the list with those other words as well as the words you provided.
It really all boils down to the fact that Google has been monitoring a lot of people doing searches for a very long time. It has a huge list of websites and search terms that have done the job for a lot of people.
There are a lot of proprietary algorithms, but the real magic is that they’ve been watching you and everyone else for a very long time.
What programming language powers Google’s search engine core?
C++, mostly. There are little bits in other languages, but the core of both the indexing system and the serving system is C++.
How does Google handle the technical aspect of fuzzy matching? How is the index implemented for that?
- With n-grams and word stemming, plus correction of badly written (misspelled) words. N-grams are used for partial matching of just about anything.
Use a ping service. Ping services can speed up your indexing process.
- Search Google for “pingmylinks”
- Click on the “add url” in the upper left corner.
- Submit your website and make sure to use all the submission tools and your site should be indexed within hours.
Our ranking algorithm simply doesn’t rank google.com highly for the query “search engine.” There is not a single, simple reason why this is the case. If I had to guess, I would say that people who type “search engine” into Google are usually looking for general information about search engines or about alternative search engines, and neither query is well-answered by listing google.com.
To be clear, we have never manually altered the search results for this (or any other) specific query.
When I tried the query “search engine” on Bing, the results were similar; bing.com was #5 and google.com was #6.
What is the search algorithm used by the Google search engine? What is its complexity?
The basic idea is using an inverted index. This means for each word keeping a list of documents on the web that contain it.
Responding to a query corresponds to retrieval of the matching documents (This is basically done by intersecting the lists for the corresponding query words), processing the documents (extracting quality signals corresponding to the doc, query pair), ranking the documents (using document quality signals like Page Rank and query signals and query/doc signals) then returning the top 10 documents.
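As a rough illustration of that retrieval step, here is a small Python sketch that intersects posting lists and ranks the survivors, starting from the shortest list as suggested in the tricks below. Everything here (build_postings, search, and the score stand-in for PageRank) is invented for the example; production systems shard, compress, and cache these lists across thousands of machines.

```python
# Minimal sketch of retrieval over an inverted index: intersect the
# posting lists of the query words, then rank the surviving documents
# with a stand-in scoring function. Real ranking uses many signals
# (PageRank, query/doc features, etc.); "score" here is just a stub.
from collections import defaultdict

def build_postings(docs):
    postings = defaultdict(set)
    for doc_id, text in docs.items():
        for word in text.lower().split():
            postings[word].add(doc_id)
    return postings

def search(postings, query, score, k=10):
    words = query.lower().split()
    # Trick mentioned below: start from the shortest posting list.
    lists = sorted((postings.get(w, set()) for w in words), key=len)
    if not lists:
        return []
    matches = set.intersection(*lists)
    return sorted(matches, key=score, reverse=True)[:k]

docs = {1: "fast cars and fast engines", 2: "fast search engines", 3: "slow search"}
postings = build_postings(docs)
# Pretend lower doc ids have higher quality (stand-in for PageRank).
print(search(postings, "fast search", score=lambda d: -d))  # [2]
```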
Here are some tricks for doing the retrieval part efficiently:
– distribute the whole thing over thousands and thousands of machines
– do it in memory
– caching
– looking first at the query word with the shortest document list
– keeping the documents in the list in reverse PageRank order so that we can stop early once we find enough good quality matches
– keep lists for pairs of words that occur frequently together
– shard by document id, this way the load is somewhat evenly distributed and the intersection is done in parallel
– compress messages that are sent across the network
etc
Jeff Dean in this great talk explains quite a few bits of the internal Google infrastructure. He mentions a few of the previous ideas in the talk.
He goes through the evolution of the Google Search Serving Design and through MapReduce while giving general advice about building large scale systems.
As for complexity, it’s pretty hard to analyze because of all the moving parts, but Jeff mentions that the latency per query is about 0.2 s and that each query touches on average 1000 computers.
Is Google’s LaMDA conscious? A philosopher’s view (theconversation.com)
LaMDA is Google’s latest artificial intelligence (AI) chatbot. Blake Lemoine, a Google AI engineer, has claimed it is sentient. He’s been put on leave after publishing his conversations with LaMDA.
If Lemoine’s claims are true, it would be a milestone in the history of humankind and technological development.
Google strongly denies LaMDA has any sentient capacity.
Fun facts about Google Search Engine Competitors
Data Source: statcounterGS
Tools Used: Excel & PowerPoint
Edit: Note that the data for Baidu/China is likely higher. How statcounterGS collects the data might understate # users from China.
Baidu is popular in China, Yandex is popular in Russia.
Yandex is great for reverse image searches, google just can’t compete with yandex in that category.
Normal Google reverse search is a joke (except for finding a bigger version of a pic, it’s good for that), but Google Lens can be as good or sometimes better at finding similar images or locations than Yandex depending on the image type. Always good to try both, and also Bing can be decent sometimes.
Bing has been profitable since 2015 even with less than 3% of the market share. So just imagine how much money Google is taking in.
Firstly: Yahoo, DuckDuckGo, Ecosia, etc. all use Bing to get their search results. Which means Bing’s usage is more than the 3% indicated.
Secondly: this graph shows overall market share (phones and PCs). But search engines make most of their money on desktop searches due to more screen space for ads, and Bing’s market share on desktop is WAY bigger: its market share on phones is ~0%, while its American desktop market share is 10-15%. That is where the money is.
What you are saying is in fact true though. We make trillions of web searches – which means even three percent market-share equals billions of hits and a ton of money.
I like duck duck go. And they have good privacy features. I just wish their maps were better because if I’m searching a local restaurant nothing is easier than google to transition from the search to the map to the webpage for the company. But for informative searches I think it gives a more objective, less curated return.
Use Ecosia and profits go to reforestation efforts!
Turns out people don’t care about their privacy, especially if it gets them results.
I recently switched to using brave browser and duck duck go and I basically can’t tell the difference in using Google and chrome.
The only times I’ve needed to use Google are for really specific searches where duck duck go doesn’t always seem to give the expected results. But for daily browsing it’s absolutely fine and far far better for privacy.
Programming, Coding and Algorithms Questions and Answers
This blog is an aggregate of clever questions and answers about Programming, Coding, and Algorithms. This is a safe place for programmers who are interested in optimizing their code, learning to code for the first time, or just want to be surrounded by the coding environment.
I think the most common mistakes I witnessed or made myself when learning are:
1: Trying to memorize every language construct. Do not rely on your memory; use Stack Overflow.
2: Spending a lot of time solving an issue yourself before you google it. Just about every issue you can stumble upon has, in 99.99% of cases, already been solved by someone else. Learn to properly search for solutions first.
3: Spending a couple of days on a task and realizing it was not worth it. If the time you spend on a single problem is more than half an hour, you are probably doing it wrong; search for alternatives.
4: Writing code from scratch. Do not reinvent the wheel: if you need to write a blog, just find a demo application in the language and framework you chose and build your logic on top of it. Need some other feature? Find another demo incorporating that feature, and use its code.
In programming you need to be smart and prioritize your time wisely. Diving into deep rabbit holes will not earn you good money.
List of Freely available programming books – What is the single most influential book every Programmers should read
- Bjarne Stroustrup – The C++ Programming Language
- Brian W. Kernighan, Rob Pike – The Practice of Programming
- Donald Knuth – The Art of Computer Programming
- Ellen Ullman – Close to the Machine
- Ellis Horowitz – Fundamentals of Computer Algorithms
- Eric Raymond – The Art of Unix Programming
- Gerald M. Weinberg – The Psychology of Computer Programming
- James Gosling – The Java Programming Language
- Joel Spolsky – The Best Software Writing I
- Keith Curtis – After the Software Wars
- Richard M. Stallman – Free Software, Free Society
- Richard P. Gabriel – Patterns of Software
- Richard P. Gabriel – Innovation Happens Elsewhere
- Code Complete (2nd edition) by Steve McConnell
- The Pragmatic Programmer
- Structure and Interpretation of Computer Programs
- The C Programming Language by Kernighan and Ritchie
- Introduction to Algorithms by Cormen, Leiserson, Rivest & Stein
- Design Patterns by the Gang of Four
- Refactoring: Improving the Design of Existing Code
- The Mythical Man Month
- The Art of Computer Programming by Donald Knuth
- Compilers: Principles, Techniques and Tools by Alfred V. Aho, Ravi Sethi and Jeffrey D. Ullman
- Gödel, Escher, Bach by Douglas Hofstadter
- Clean Code: A Handbook of Agile Software Craftsmanship by Robert C. Martin
- Effective C++
- More Effective C++
- CODE by Charles Petzold
- Programming Pearls by Jon Bentley
- Working Effectively with Legacy Code by Michael C. Feathers
- Peopleware by Demarco and Lister
- Coders at Work by Peter Seibel
- Surely You’re Joking, Mr. Feynman!
- Effective Java 2nd edition
- Patterns of Enterprise Application Architecture by Martin Fowler
- The Little Schemer
- The Seasoned Schemer
- Why’s (Poignant) Guide to Ruby
- The Inmates Are Running The Asylum: Why High Tech Products Drive Us Crazy and How to Restore the Sanity
- The Art of Unix Programming
- Test-Driven Development: By Example by Kent Beck
- Practices of an Agile Developer
- Don’t Make Me Think
- Agile Software Development, Principles, Patterns, and Practices by Robert C. Martin
- Domain Driven Designs by Eric Evans
- The Design of Everyday Things by Donald Norman
- Modern C++ Design by Andrei Alexandrescu
- Best Software Writing I by Joel Spolsky
- The Practice of Programming by Kernighan and Pike
- Pragmatic Thinking and Learning: Refactor Your Wetware by Andy Hunt
- Software Estimation: Demystifying the Black Art by Steve McConnell
- The Passionate Programmer (My Job Went To India) by Chad Fowler
- Hackers: Heroes of the Computer Revolution
- Algorithms + Data Structures = Programs
- Writing Solid Code
- JavaScript – The Good Parts
- Getting Real by 37 Signals
- Foundations of Programming by Karl Seguin
- Computer Graphics: Principles and Practice in C (2nd Edition)
- Thinking in Java by Bruce Eckel
- The Elements of Computing Systems
- Refactoring to Patterns by Joshua Kerievsky
- Modern Operating Systems by Andrew S. Tanenbaum
- The Annotated Turing
- Things That Make Us Smart by Donald Norman
- The Timeless Way of Building by Christopher Alexander
- The Deadline: A Novel About Project Management by Tom DeMarco
- The C++ Programming Language (3rd edition) by Stroustrup
- Patterns of Enterprise Application Architecture
- Computer Systems – A Programmer’s Perspective
- Agile Principles, Patterns, and Practices in C# by Robert C. Martin
- Growing Object-Oriented Software, Guided by Tests
- Framework Design Guidelines by Brad Abrams
- Object Thinking by Dr. David West
- Advanced Programming in the UNIX Environment by W. Richard Stevens
- Hackers and Painters: Big Ideas from the Computer Age
- The Soul of a New Machine by Tracy Kidder
- CLR via C# by Jeffrey Richter
- The Timeless Way of Building by Christopher Alexander
- Design Patterns in C# by Steve Metsker
- Alice in Wonderland by Lewis Carroll
- Zen and the Art of Motorcycle Maintenance by Robert M. Pirsig
- About Face – The Essentials of Interaction Design
- Here Comes Everybody: The Power of Organizing Without Organizations by Clay Shirky
- The Tao of Programming
- Computational Beauty of Nature
- Writing Solid Code by Steve Maguire
- Philip and Alex’s Guide to Web Publishing
- Object-Oriented Analysis and Design with Applications by Grady Booch
- Effective Java by Joshua Bloch
- Computability by N. J. Cutland
- Masterminds of Programming
- The Tao Te Ching
- The Productive Programmer
- The Art of Deception by Kevin Mitnick
- The Career Programmer: Guerilla Tactics for an Imperfect World by Christopher Duncan
- Paradigms of Artificial Intelligence Programming: Case studies in Common Lisp
- Masters of Doom
- Pragmatic Unit Testing in C# with NUnit by Andy Hunt and Dave Thomas with Matt Hargett
- How To Solve It by George Polya
- The Alchemist by Paulo Coelho
- Smalltalk-80: The Language and its Implementation
- Writing Secure Code (2nd Edition) by Michael Howard
- Introduction to Functional Programming by Philip Wadler and Richard Bird
- No Bugs! by David Thielen
- Rework by Jason Freid and DHH
- JUnit in Action
Source: Wikipedia
Hidden Features of C#
What are the most hidden features or tricks of C# that even C# fans, addicts, experts barely know?
Here are the revealed features so far:
Keywords
- yield by Michael Stum
- var by Michael Stum
- using() statement by kokos
- readonly by kokos
- as by Mike Stone
- as / is by Ed Swangren
- as / is (improved) by Rocketpants
- default by deathofrats
- global:: by pzycoman
- using() blocks by AlexCuse
- volatile by Jakub Šturc
- extern alias by Jakub Šturc
Attributes
- DefaultValueAttribute by Michael Stum
- ObsoleteAttribute by DannySmurf
- DebuggerDisplayAttribute by Stu
- DebuggerBrowsable and DebuggerStepThrough by bdukes
- ThreadStaticAttribute by marxidad
- FlagsAttribute by Martin Clarke
- ConditionalAttribute by AndrewBurns
Syntax
- ?? (null coalescing) operator by kokos
- Number flaggings by Nick Berardi
- where T : new by Lars Mæhlum
- Implicit generics by Keith
- One-parameter lambdas by Keith
- Auto properties by Keith
- Namespace aliases by Keith
- Verbatim string literals with @ by Patrick
- enum values by lfoust
- @variablenames by marxidad
- event operators by marxidad
- Format string brackets by Portman
- Property accessor accessibility modifiers by xanadont
- Conditional (ternary) operator (?:) by JasonS
- checked and unchecked operators by Binoj Antony
- implicit and explicit operators by Flory
Language Features
- Nullable types by Brad Barker
- Anonymous types by Keith
- __makeref __reftype __refvalue by Judah Himango
- Object initializers by lomaxx
- Format strings by David in Dakota
- Extension Methods by marxidad
- partial methods by Jon Erickson
- Preprocessor directives by John Asbeck
- DEBUG pre-processor directive by Robert Durgin
- Operator overloading by SefBkn
- Type inference by chakrit
- Boolean operators taken to the next level by Rob Gough
- Pass value-type variable as interface without boxing by Roman Boiko
- Programmatically determine declared variable type by Roman Boiko
- Static Constructors by Chris
- Easier-on-the-eyes / condensed ORM-mapping using LINQ by roosteronacid
- __arglist by Zac Bowling
Visual Studio Features
- Select block of text in editor by Himadri
- Snippets by DannySmurf
Framework
- TransactionScope by KiwiBastard
- DependentTransaction by KiwiBastard
- Nullable<T> by IainMH
- Mutex by Diago
- System.IO.Path by ageektrapped
- WeakReference by Juan Manuel
Methods and Properties
- String.IsNullOrEmpty() method by KiwiBastard
- List.ForEach() method by KiwiBastard
- BeginInvoke() and EndInvoke() methods by Will Dean
- Nullable<T>.HasValue and Nullable<T>.Value properties by Rismo
- GetValueOrDefault method by John Sheehan
Tips & Tricks
- Nice method for event handlers by Andreas H.R. Nilsson
- Uppercase comparisons by John
- Access anonymous types without reflection by dp
- A quick way to lazily instantiate collection properties by Will
- JavaScript-like anonymous inline-functions by roosteronacid
Other
- netmodules by kokos
- LINQBridge by Duncan Smart
- Parallel Extensions by Joel Coehoorn
- This isn’t C# per se, but I haven’t seen anyone who really uses System.IO.Path.Combine() to the extent that they should. In fact, the whole Path class is really useful, but no one uses it!
- Lambdas and type inference are underrated. Lambdas can have multiple statements and they double as a compatible delegate object automatically (just make sure the signatures match), as in:
Console.CancelKeyPress +=
(sender, e) => {
Console.WriteLine("CTRL+C detected!\n");
e.Cancel = true;
};
- From Rick Strahl: You can chain the ?? operator so that you can do a bunch of null comparisons.
string result = value1 ?? value2 ?? value3 ?? String.Empty;
- From CLR via C#:
When normalizing strings, it is highly recommended that you use ToUpperInvariant instead of ToLowerInvariant because Microsoft has optimized the code for performing uppercase comparisons.
I remember one time my coworker always changed strings to uppercase before comparing. I’ve always wondered why he does that because I feel it’s more “natural” to convert to lowercase first. After reading the book now I know why.
- My favorite trick is using the null coalesce operator and parentheses to automagically instantiate collections for me.
private IList<Foo> _foo;
public IList<Foo> ListOfFoo
{ get { return _foo ?? (_foo = new List<Foo>()); } }
- Here are some interesting hidden C# features, in the form of undocumented C# keywords:
__makeref
__reftype
__refvalue
__arglist
These are undocumented C# keywords (even Visual Studio recognizes them!) that were added for more efficient boxing/unboxing prior to generics. They work in coordination with the System.TypedReference struct.
There’s also __arglist, which is used for variable length parameter lists.
One thing folks don’t know much about is System.WeakReference — a very useful class that keeps track of an object but still allows the garbage collector to collect it.
The most useful “hidden” feature would be the yield return keyword. It’s not really hidden, but a lot of folks don’t know about it. LINQ is built atop this; it allows for delay-executed queries by generating a state machine under the hood. Raymond Chen recently posted about the internal, gritty details.
- Using @ for variable names that are keywords.
var @object = new object();
var @string = "";
var @if = IpsoFacto();
- If you want to exit your program without calling any finally blocks or finalizers use FailFast:
Environment.FailFast()
Read more hidden C# Features at Hidden Features of C#? – Stack Overflow
Hidden Features of Python
- Argument Unpacking
- Braces
- Chaining Comparison Operators
- Decorators
- Default Argument Gotchas / Dangers of Mutable Default arguments
- Descriptors
- Dictionary default .get value
- Docstring Tests
- Ellipsis Slicing Syntax
- Enumeration
- For/else
- Function as iter() argument
- Generator expressions
- import this
- In Place Value Swapping
- List stepping
- __missing__ items
- Multi-line Regex
- Named string formatting
- Nested list/generator comprehensions
- New types at runtime
- .pth files
- ROT13 Encoding
- Regex Debugging
- Sending to Generators
- Tab Completion in Interactive Interpreter
- Ternary Expression
- try/except/else
- Unpacking + print() function
- with statement
Source: Stack Overflow
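For the curious, here is a quick runnable Python sketch showing a handful of the features from the list above (in-place value swapping, chained comparisons, the ternary expression, argument unpacking, for/else, and enumeration):

```python
# A few of the listed features, in runnable form.

# In-place value swapping (tuple packing/unpacking)
a, b = 1, 2
a, b = b, a

# Chaining comparison operators
x = 5
print(1 < x < 10)          # True

# Ternary expression
parity = "even" if x % 2 == 0 else "odd"

# Argument unpacking
def add(p, q, r):
    return p + q + r
args = [1, 2, 3]
print(add(*args))          # 6

# For/else: the else branch runs only if the loop was not broken out of
for n in range(2, x):
    if x % n == 0:
        print("divisor found:", n)
        break
else:
    print("x is prime")    # this branch runs for x = 5

# Enumeration
for index, letter in enumerate("abc"):
    print(index, letter)
```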
What IDE to Use for Python
Acronyms used:
L - Linux
W - Windows
M - Mac
C - Commercial
F - Free
CF - Commercial with Free limited edition
? - To be confirmed
What is The right JSON content type?
For JSON text:
application/json
Example: { "Name": "Foo", "Id": 1234, "Rank": 7 }
For JSONP (runnable JavaScript) with callback:
application/javascript
Example: functionCall({"Name": "Foo", "Id": 1234, "Rank": 7});
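For a concrete (if minimal) illustration, here is a sketch of serving that JSON body with the application/json content type using only Python’s standard library http.server. The handler name, port, and payload are arbitrary choices for the example, not a recommendation for production use:

```python
# Minimal sketch: serving the example JSON body above with the
# application/json content type, using only the standard library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class JsonHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"Name": "Foo", "Id": 1234, "Rank": 7}).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), JsonHandler).serve_forever()
```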
Here are some blog posts that were mentioned in the relevant comments:
- Why you shouldn’t use text/html for JSON
- Internet Explorer sometimes has issues with application/json
- A rather complete list of Mimetypes and what to use them for
- The official MIME type list at IANA, from @gnrfan’s answer below
IANA has registered the official MIME type for JSON as application/json.
When asked about why not text/json, Crockford seems to have said that JSON is not really JavaScript nor text, and also that IANA was more likely to hand out application/* than text/*.
More resources:
The JSON (JavaScript Object Notation) and JSONP (“JSON with padding”) formats seem very similar, so it can be confusing which MIME type each should use. Even though the formats are similar, there are some subtle differences between them.
So whenever in doubt, I have a very simple approach (which works perfectly fine in most cases): go and check the corresponding RFC document.
JSON: RFC 4627 (The application/json Media Type for JavaScript Object Notation (JSON)) is the specification of the JSON format. It says in section 6 that the MIME media type for JSON text is application/json.
JSONP: JSONP (“JSON with padding”) is handled in a different way than JSON by the browser. JSONP is treated as a regular JavaScript script and therefore should use application/javascript, the current official MIME type for JavaScript. In many cases, however, the text/javascript MIME type will work fine too.
Note that text/javascript has been marked as obsolete by RFC 4329 (Scripting Media Types), and it is recommended to use the application/javascript type instead. However, due to legacy reasons, text/javascript is still widely used and has cross-browser support (which is not always the case with the application/javascript MIME type, especially with older browsers).
What are some mistakes to avoid while learning programming?
- Overuse of the GOTO statement. Most schools teach that this is a no-no.
- Not commenting your code with proper documentation – what exactly does the code do?
- Endless loops – a structured loop that has no exit point.
- Overwriting memory – destroying data and/or code, especially with dynamic allocation, stacks, and queues.
- Not following discipline – requirements, design, code, test, implementation.
Moreover, complex code should have a blueprint – a design. Skipping it is like saying “let’s build a house without a floor plan.” Code/programs that have a requirements and design specification BEFORE writing code tend to have a lower error rate, which means less time debugging and fixing errors. Source: Quora
Lisp.
The thing that always struck me is that the best programmers I would meet or read all had a couple of things in common.
- They didn’t use IDEs, preferring Emacs or Vim.
- They all learned or used Functional Programming (Lisp, Haskell, OCaml)
- They all wrote or endorsed some kind of testing, even if it’s just minimal TDD.
- They avoided fads and dependencies like the plague.
It is a basic truth that learning Lisp, or any functional programming, will fundamentally change the way you program and think about programming. Source: Quora
What are the Top 20 lesser known but cool data structures?
1- Tries, also known as prefix-trees or crit-bit trees, have existed for over 40 years but are still relatively unknown. A very cool use of tries is described in “TRASH – A dynamic LC-trie and hash data structure“, which combines a trie with a hash function.
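For readers who have not met tries before, here is a minimal Python sketch of the idea: nodes are plain dicts keyed by character, with a sentinel marking word ends. It is deliberately bare-bones (no deletion, no compression into a crit-bit/PATRICIA form), and the helper names are just illustrative.

```python
# A bare-bones trie (prefix tree): each node is a dict of children,
# with a sentinel key marking the end of a word.
_END = "$"

def trie_insert(root, word):
    node = root
    for ch in word:
        node = node.setdefault(ch, {})
    node[_END] = True

def trie_has_prefix(root, prefix):
    node = root
    for ch in prefix:
        if ch not in node:
            return False
        node = node[ch]
    return True

root = {}
for w in ("tree", "trie", "trash"):
    trie_insert(root, w)
print(trie_has_prefix(root, "tr"))    # True
print(trie_has_prefix(root, "hash"))  # False
```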
2- Bloom filter: Bit array of m bits, initially all set to 0.
To add an item you run it through k hash functions that will give you k indices in the array which you then set to 1.
To check if an item is in the set, compute the k indices and check if they are all set to 1.
Of course, this gives some probability of false-positives (according to wikipedia it’s about 0.61^(m/n) where n is the number of inserted items). False-negatives are not possible.
Removing an item is impossible, but you can implement counting bloom filter, represented by array of ints and increment/decrement.
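A minimal Python sketch of the Bloom filter just described: m bits, k hash indices per item, no false negatives. Deriving the k indices by double hashing (h1 + i*h2 over a single SHA-256 digest) is just one common convention assumed here, and the sizes are arbitrary.

```python
# Tiny Bloom filter: m bits, k derived hash indices per item.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k
        self.bits = [0] * m

    def _indices(self, item):
        digest = hashlib.sha256(item.encode("utf-8")).digest()
        h1 = int.from_bytes(digest[:8], "big")
        h2 = int.from_bytes(digest[8:16], "big")
        return [(h1 + i * h2) % self.m for i in range(self.k)]

    def add(self, item):
        for idx in self._indices(item):
            self.bits[idx] = 1

    def might_contain(self, item):
        return all(self.bits[idx] for idx in self._indices(item))

bf = BloomFilter()
bf.add("hello")
print(bf.might_contain("hello"))  # True (never a false negative)
print(bf.might_contain("world"))  # almost certainly False
```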
3- Rope: It’s a string that allows for cheap prepends, substrings, middle insertions and appends. I’ve really only had use for it once, but no other structure would have sufficed. Prepends on regular strings and arrays were just far too expensive for what we needed to do, and reversing everything was out of the question.
4- Skip lists are pretty neat.
Wikipedia
A skip list is a probabilistic data structure, based on multiple parallel, sorted linked lists, with efficiency comparable to a binary search tree (order log n average time for most operations).
They can be used as an alternative to balanced trees (using probabilistic balancing rather than strict enforcement of balancing). They are easy to implement and faster than, say, a red-black tree. I think they should be in every good programmer’s toolchest.
If you want to get an in-depth introduction to skip-lists here is a link to a video of MIT’s Introduction to Algorithms lecture on them.
Also, here is a Java applet demonstrating Skip Lists visually.
5– Spatial Indices, in particular R-trees and KD-trees, store spatial data efficiently. They are good for geographical map coordinate data and VLSI place and route algorithms, and sometimes for nearest-neighbor search.
Bit Arrays store individual bits compactly and allow fast bit operations.
6- Zippers – derivatives of data structures that modify the structure to have a natural notion of a ‘cursor’ (current location). These are really useful as they guarantee indices cannot be out of bounds; they are used, e.g., in the xmonad window manager to track which window has focus.
Amazingly, you can derive them by applying techniques from calculus to the type of the original data structure!
7- Suffix tries. Useful for almost all kinds of string searching (http://en.wikipedia.org/wiki/Suffix_trie#Functionality). See also suffix arrays; they’re not quite as fast as suffix trees, but a whole lot smaller.
8- Splay trees (as mentioned above). The reason they are cool is threefold:
- They are small: you only need the left and right pointers like you do in any binary tree (no node-color or size information needs to be stored)
- They are (comparatively) very easy to implement
- They offer optimal amortized complexity for a whole host of “measurement criteria” (log n lookup time being the one everybody knows). See http://en.wikipedia.org/wiki/Splay_tree#Performance_theorems
9- Heap-ordered search trees: you store a bunch of (key, prio) pairs in a tree, such that it’s a search tree with respect to the keys, and heap-ordered with respect to the priorities. One can show that such a tree has a unique shape (and it’s not always fully packed up-and-to-the-left). With random priorities, it gives you expected O(log n) search time, IIRC.
10- A niche one is adjacency lists for undirected planar graphs with O(1) neighbour queries. This is not so much a data structure as a particular way to organize an existing data structure. Here’s how you do it: every planar graph has a node with degree at most 6. Pick such a node, put its neighbors in its neighbor list, remove it from the graph, and recurse until the graph is empty. When given a pair (u, v), look for u in v’s neighbor list and for v in u’s neighbor list. Both have size at most 6, so this is O(1).
By the above algorithm, if u and v are neighbors, you won’t have both u in v’s list and v in u’s list. If you need this, just add each node’s missing neighbors to that node’s neighbor list, but store how much of the neighbor list you need to look through for fast lookup.
11- Lock-free alternatives to standard data structures, i.e. lock-free queues, stacks and lists, are much overlooked.
They are increasingly relevant as concurrency becomes a higher priority, and they are a much more admirable goal than using mutexes or locks to handle concurrent reads/writes.
Here’s some links
http://www.cl.cam.ac.uk/research/srg/netos/lock-free/
http://www.research.ibm.com/people/m/michael/podc-1996.pdf [Links to PDF]
http://www.boyet.com/Articles/LockfreeStack.html
Mike Acton’s (often provocative) blog has some excellent articles on lock-free design and approaches
12- I think Disjoint Set is pretty nifty for cases when you need to divide a bunch of items into distinct sets and query membership. Good implementations of the Union and Find operations result in amortized costs that are effectively constant (the inverse of Ackermann’s function, if I recall my data structures class correctly).
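A small Python sketch of a disjoint-set (union-find) structure with the two classic optimizations, path compression and union by rank, which is what gives the near-constant (inverse-Ackermann) amortized cost mentioned above:

```python
# Disjoint-set (union-find) with path compression and union by rank.
class DisjointSet:
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

ds = DisjointSet(5)
ds.union(0, 1)
ds.union(3, 4)
print(ds.find(1) == ds.find(0))  # True: same set
print(ds.find(2) == ds.find(3))  # False: different sets
```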
13- Fibonacci heaps
They’re used in some of the asymptotically fastest known algorithms for a lot of graph-related problems, such as the Shortest Path problem. Dijkstra’s algorithm runs in O(E log V) time with standard binary heaps; using Fibonacci heaps improves that to O(E + V log V), which is a huge speedup for dense graphs. Unfortunately, they have high constant factors, which often makes them impractical.
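To make the comparison concrete, here is a standard-library Python sketch of Dijkstra’s algorithm with a binary heap (heapq), i.e. the O(E log V) variant mentioned above. A Fibonacci heap would replace this priority queue, but Python doesn’t ship one, so skipping stale heap entries stands in for decrease-key.

```python
# Dijkstra's algorithm with a binary heap (heapq): the O(E log V) variant.
import heapq

def dijkstra(graph, source):
    # graph: {node: [(neighbor, weight), ...]}
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale entry; lazy deletion instead of decrease-key
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

graph = {"a": [("b", 1), ("c", 4)], "b": [("c", 2)], "c": []}
print(dijkstra(graph, "a"))  # {'a': 0, 'b': 1, 'c': 3}
```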
14- Anyone with experience in 3D rendering should be familiar with BSP trees. Generally, it’s a method of structuring a 3D scene so it can be managed for rendering, given the camera coordinates and bearing.
Binary space partitioning (BSP) is a method for recursively subdividing a space into convex sets by hyperplanes. This subdivision gives rise to a representation of the scene by means of a tree data structure known as a BSP tree.
In other words, it is a method of breaking up intricately shaped polygons into convex sets, or smaller polygons consisting entirely of non-reflex angles (angles smaller than 180°). For a more general description of space partitioning, see space partitioning.
Originally, this approach was proposed in 3D computer graphics to increase the rendering efficiency. Some other applications include performing geometrical operations with shapes (constructive solid geometry) in CAD, collision detection in robotics and 3D computer games, and other computer applications that involve handling of complex spatial scenes.
15- Huffman trees – used for compression.
16- Have a look at Finger Trees, especially if you’re a fan of the previously mentioned purely functional data structures. They’re a functional representation of persistent sequences supporting access to the ends in amortized constant time, and concatenation and splitting in time logarithmic in the size of the smaller piece.
As per the original article:
Our functional 2-3 finger trees are an instance of a general design technique introduced by Okasaki (1998), called implicit recursive slowdown. We have already noted that these trees are an extension of his implicit deque structure, replacing pairs with 2-3 nodes to provide the flexibility required for efficient concatenation and splitting.
A Finger Tree can be parameterized with a monoid, and using different monoids will result in different behaviors for the tree. This lets Finger Trees simulate other data structures.
17- Circular or ring buffer– used for streaming, among other things.
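A tiny Python sketch of a fixed-size ring buffer, where new writes overwrite the oldest data once the buffer is full (the usual behaviour wanted for streaming); the class and method names are just illustrative.

```python
# Fixed-size circular (ring) buffer: once full, writes overwrite the oldest data.
class RingBuffer:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = [None] * capacity
        self.start = 0   # index of the oldest element
        self.size = 0

    def push(self, item):
        end = (self.start + self.size) % self.capacity
        self.data[end] = item
        if self.size < self.capacity:
            self.size += 1
        else:
            self.start = (self.start + 1) % self.capacity  # overwrite oldest

    def items(self):
        return [self.data[(self.start + i) % self.capacity] for i in range(self.size)]

rb = RingBuffer(3)
for x in range(5):
    rb.push(x)
print(rb.items())  # [2, 3, 4] -- the two oldest values were overwritten
```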
18- I’m surprised no one has mentioned Merkle trees (i.e. hash trees).
Used in many cases (P2P programs, digital signatures) where you want to verify the hash of a whole file when you only have part of the file available to you.
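Here is a compact Python sketch of computing a Merkle root with hashlib: hash the data blocks, then repeatedly hash pairs of child hashes until one root remains. Duplicating the last hash on odd-sized levels is one common convention assumed here, not the only one.

```python
# Merkle (hash) tree root: leaves are hashes of data blocks, parents are
# hashes of their children's concatenated hashes. Verifying one block
# then only needs its sibling hashes up the tree, not the whole file.
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(blocks):
    level = [_h(b) for b in blocks]
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])  # duplicate last hash on odd-sized levels
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

blocks = [b"block-0", b"block-1", b"block-2", b"block-3"]
print(merkle_root(blocks).hex())
```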
19- <zvrba> Van Emde-Boas trees
I think it’d be useful to know why they’re cool. In general, the question “why” is the most important to ask 😉
My answer is that they give you O(log log n) dictionaries with {1..n} keys, independent of how many of the keys are in use. Just like repeated halving gives you O(log n), repeated sqrting gives you O(log log n), which is what happens in the vEB tree.
20- An interesting variant of the hash table is called Cuckoo Hashing. It uses multiple hash functions instead of just 1 in order to deal with hash collisions. Collisions are resolved by removing the old object from the location specified by the primary hash, and moving it to a location specified by an alternate hash function. Cuckoo Hashing allows for more efficient use of memory space because you can increase your load factor up to 91% with only 3 hash functions and still have good access time.
Honorable mentions: splay trees, Cuckoo Hashing, min-max heap, cache-oblivious data structures, Left-Leaning Red-Black Trees, Work Stealing Queue, Bootstrapped skew-binomial heaps, Kd-Trees, MX-CIF Quadtrees, HAMT, Inverted Index, Fenwick Tree, Ball Trees, Van Emde-Boas trees, Nested sets, half-edge data structure, Scapegoat trees, unrolled linked list, 2-3 Finger Trees, Pairing heaps, Interval Trees, XOR Linked List, Binary Decision Diagram, the Region Quadtree, treaps, Counted unsorted balanced B-trees, Arne Andersson trees, DAWGs, BK-Trees (Burkhard-Keller Trees), Zobrist Hashing, Persistent Data Structures, B* tree, Deletable Bloom Filters (DlBF), Ring Buffer, Skip lists, Priority deque, Ternary Search Tree, FM-index, PQ-Trees, sparse matrix data structures, Delta list/delta queue, Bucket Brigade, Burrows–Wheeler transform, corner-stitched data structure, Disjoint Set Forests, Binomial heap, Cycle Sort.
Is there any way to make interpreted languages such as Python just as fast as C++? Why or why not?
Variable names in languages like Python are not bound to storage locations until run time. That means you have to look up each name to find out what storage it is bound to and what its type is before you can apply an operation like “+” to it. In C++, names are bound to storage at compile time, so no lookup is needed, and the type is fixed at compile time so the compiler can generate machine code with no overhead for interpretation. Late-bound languages will never be as fast as languages bound at compile time.
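You can actually watch that late binding in CPython’s bytecode: the addition below compiles to a single generic binary-add instruction that must inspect the operand types at run time (named BINARY_ADD on older interpreters and BINARY_OP on 3.11+), whereas a C++ compiler would have emitted a type-specific machine instruction at compile time.

```python
# Inspect the bytecode for a dynamically typed addition. The generic
# binary-add opcode is resolved against the operand types at run time.
import dis

def add(a, b):
    return a + b

dis.dis(add)
```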
You could make a language that looks kinda like Python that is compile-time bound and statically typed. You could incrementally compile such a language. But you can also build an environment that incrementally compiles C++ so it would feel a lot like using Python. Try godbolt or tutorialspoint if you want to see this actually working for small programs.
Source: quora
I want to be a computer programmer when I grow up but I don’t have a high IQ. What should I do?
Have I got good news for you! No one has ever asked me my IQ, nor have I ever asked anyone for their IQ. This was true when I was a software engineer, and is true now that I’m a computer scientist.
Try to learn to program. If you can learn in an appropriate environment (a class with a good instructor), go from there. If you fail the first time, adjust your learning approach and try again. If you still can’t, find another future; you probably wouldn’t like computer programming, anyway. If you learn later, that’s fine.
Source: Here
Which are the hardest C++ concepts beginners struggle to understand? How would you have explained them?
Beginners to C++ will consistently struggle with getting a C++ program off the ground. Even “Hello World” can be a challenge. Making a GUI in C++ from scratch? Almost impossible in the beginning.
These 4 areas cannot be learned by any beginner to C++ in 1 day or even 1 month in most cases. These areas challenge nearly all beginners and I have seen cases where it can take a few months to teach.
These are the most fundamental things you need to be able to do to build and produce a program in C++.
Basic Challenge #1: Creating a Program File
- Compiling and linking, even in an IDE.
- Project settings in an IDE for C++ projects.
- Make files, scripts, environment variables affecting compilation.
Basic Challenge #2: Using Other People’s C++ Code
- Going outside the STL and using libraries.
- Proper library paths in source, file path during compile.
- Static versus dynamic libraries during linking.
- Symbol reference resolution.
Basic Challenge #3: Troubleshooting Code
- Deciphering compiler error messages.
- Deciphering linker error messages.
- Resolving segmentation faults.
Basic Challenge #4: Actual C++ Code
- Writing excellent if/loop/case/assign/call statements.
- Managing header/implementation files consistently.
- Rigorously avoiding name collisions while staying productive.
- Various forms of function callback, especially in GUIs.
You cannot explain any of them in a way that most people will pick up right away. You can describe these things by way of analogy, and you can even have learners mirror you as you demonstrate them. I’ve done similar things with trainees in a work setting. In the end, it usually requires time on the order of months and years to pick up these things.
What and where are the stack and the heap?
- Where and what are they (physically in a real computer’s memory)?
- To what extent are they controlled by the OS or language run-time?
- What is their scope?
- What determines the size of each of them?
- What makes one faster?
The stack is the memory set aside as scratch space for a thread of execution. When a function is called, a block is reserved on the top of the stack for local variables and some bookkeeping data. When that function returns, the block becomes unused and can be used the next time a function is called. The stack is always reserved in a LIFO (last in first out) order; the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack; freeing a block from the stack is nothing more than adjusting one pointer.
The heap is memory set aside for dynamic allocation. Unlike the stack, there’s no enforced pattern to the allocation and deallocation of blocks from the heap; you can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time; there are many custom heap allocators available to tune heap performance for different usage patterns.
Each thread gets a stack, while there’s typically only one heap for the application (although it isn’t uncommon to have multiple heaps for different types of allocation).
To answer your questions directly:
To what extent are they controlled by the OS or language runtime?
The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.
What is their scope?
The stack is attached to a thread, so when the thread exits the stack is reclaimed. The heap is typically allocated at application startup by the runtime, and is reclaimed when the application (technically process) exits.
What determines the size of each of them?
The size of the stack is set when a thread is created. The size of the heap is set on application startup, but can grow as space is needed (the allocator requests more memory from the operating system).
What makes one faster?
The stack is faster because the access pattern makes it trivial to allocate and deallocate memory from it (a pointer/integer is simply incremented or decremented), while the heap has much more complex bookkeeping involved in an allocation or deallocation. Also, each byte in the stack tends to be reused very frequently which means it tends to be mapped to the processor’s cache, making it very fast. Another performance hit for the heap is that the heap, being mostly a global resource, typically has to be multi-threading safe, i.e. each allocation and deallocation needs to be – typically – synchronized with “all” other heap accesses in the program.
A clear demonstration (image source: vikashazrati.wordpress.com):
Stack:
- Stored in computer RAM just like the heap.
- Variables created on the stack will go out of scope and are automatically deallocated.
- Much faster to allocate in comparison to variables on the heap.
- Implemented with an actual stack data structure.
- Stores local data, return addresses, used for parameter passing.
- Can have a stack overflow when too much of the stack is used (mostly from infinite or too deep recursion, very large allocations).
- Data created on the stack can be used without pointers.
- You would use the stack if you know exactly how much data you need to allocate before compile time and it is not too big.
- Usually has a maximum size already determined when your program starts.
Heap:
- Stored in computer RAM just like the stack.
- In C++, variables on the heap must be destroyed manually and never fall out of scope. The data is freed with delete, delete[], or free.
- Slower to allocate in comparison to variables on the stack.
- Used on demand to allocate a block of data for use by the program.
- Can have fragmentation when there are a lot of allocations and deallocations.
- In C++ or C, data created on the heap will be pointed to by pointers and allocated with new or malloc respectively.
- Can have allocation failures if too big of a buffer is requested to be allocated.
- You would use the heap if you don’t know exactly how much data you will need at run time or if you need to allocate a lot of data.
- Responsible for memory leaks.
Example:
void foo()
{
char *pBuffer; //<--nothing allocated yet (excluding the pointer itself, which is allocated here on the stack).
bool b = true; // Allocated on the stack.
if(b)
{
//Create 500 bytes on the stack
char buffer[500];
//Create 500 bytes on the heap
pBuffer = new char[500];
}//<-- buffer is deallocated here, pBuffer is not
}//<--- oops there's a memory leak, I should have called delete[] pBuffer;
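For comparison, a minimal sketch of the same function without the leak, using std::unique_ptr (C++14 or later for std::make_unique) so the heap allocation is released automatically:

#include <memory>
void foo()
{
    bool b = true;                                    // on the stack
    if (b)
    {
        char buffer[500];                             // 500 bytes on the stack
        auto pBuffer = std::make_unique<char[]>(500); // 500 bytes on the heap
        (void)buffer;                                 // silence the unused-variable warning
    } // buffer goes out of scope here, and pBuffer's destructor calls delete[] for us -- no leak
}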
The most important point is that heap and stack are generic terms for ways in which memory can be allocated. They can be implemented in many different ways, and the terms apply to the basic concepts.
- In a stack of items, items sit one on top of the other in the order they were placed there, and you can only remove the top one (without toppling the whole thing over).
The simplicity of a stack is that you do not need to maintain a table containing a record of each section of allocated memory; the only state information you need is a single pointer to the end of the stack. To allocate and de-allocate, you just increment and decrement that single pointer. Note: a stack can sometimes be implemented to start at the top of a section of memory and extend downwards rather than growing upwards.
- In a heap, there is no particular order to the way items are placed. You can reach in and remove items in any order because there is no clear ‘top’ item.
Heap allocation requires maintaining a full record of what memory is allocated and what isn’t, as well as some overhead maintenance to reduce fragmentation, find contiguous memory segments big enough to fit the requested size, and so on. Memory can be deallocated at any time leaving free space. Sometimes a memory allocator will perform maintenance tasks such as defragmenting memory by moving allocated memory around, or garbage collecting – identifying at runtime when memory is no longer in scope and deallocating it.
These images should do a fairly good job of describing the two ways of allocating and freeing memory in a stack and a heap. Yum!
- To what extent are they controlled by the OS or language runtime?
As mentioned, heap and stack are general terms, and can be implemented in many ways. Computer programs typically have a stack called a call stack which stores information relevant to the current function such as a pointer to whichever function it was called from, and any local variables. Because functions call other functions and then return, the stack grows and shrinks to hold information from the functions further down the call stack. A program doesn’t really have runtime control over it; it’s determined by the programming language, OS and even the system architecture.
A heap is a general term used for any memory that is allocated dynamically and randomly; i.e. out of order. The memory is typically allocated by the OS, with the application calling API functions to do this allocation. There is a fair bit of overhead required in managing dynamically allocated memory, which is usually handled by the runtime code of the programming language or environment used.
- What is their scope?
The call stack is such a low level concept that it doesn’t relate to ‘scope’ in the sense of programming. If you disassemble some code you’ll see relative pointer style references to portions of the stack, but as far as a higher level language is concerned, the language imposes its own rules of scope. One important aspect of a stack, however, is that once a function returns, anything local to that function is immediately freed from the stack. That works the way you’d expect it to work given how your programming languages work. In a heap, it’s also difficult to define. The scope is whatever is exposed by the OS, but your programming language probably adds its rules about what a “scope” is in your application. The processor architecture and the OS use virtual addressing, which the processor translates to physical addresses and there are page faults, etc. They keep track of what pages belong to which applications. You never really need to worry about this, though, because you just use whatever method your programming language uses to allocate and free memory, and check for errors (if the allocation/freeing fails for any reason).
- What determines the size of each of them?
Again, it depends on the language, compiler, operating system and architecture. A stack is usually pre-allocated, because by definition it must be contiguous memory. The language compiler or the OS determine its size. You don’t store huge chunks of data on the stack, so it’ll be big enough that it should never be fully used, except in cases of unwanted endless recursion (hence, “stack overflow”) or other unusual programming decisions.
A heap is a general term for anything that can be dynamically allocated. Depending on which way you look at it, it is constantly changing size. In modern processors and operating systems the exact way it works is very abstracted anyway, so you don’t normally need to worry much about how it works deep down, except that (in languages where it lets you) you mustn’t use memory that you haven’t allocated yet or memory that you have freed.
- What makes one faster?
The stack is faster because all free memory is always contiguous. No list needs to be maintained of all the segments of free memory, just a single pointer to the current top of the stack. Compilers usually store this pointer in a special, fast register for this purpose. What’s more, subsequent operations on a stack are usually concentrated within very nearby areas of memory, which at a very low level is good for optimization by the processor on-die caches.
- Both the stack and the heap are memory areas allocated from the underlying operating system (often virtual memory that is mapped to physical memory on demand).
- In a multi-threaded environment each thread will have its own completely independent stack but they will share the heap. Concurrent access has to be controlled on the heap and is not possible on the stack.
The heap
- The heap contains a linked list of used and free blocks. New allocations on the heap (by new or malloc) are satisfied by creating a suitable block from one of the free blocks. This requires updating the list of blocks on the heap. This meta information about the blocks on the heap is also stored on the heap, often in a small area just in front of every block.
- As the heap grows, new blocks are often allocated from lower addresses towards higher addresses. Thus you can think of the heap as a heap of memory blocks that grows in size as memory is allocated. If the heap is too small for an allocation, the size can often be increased by acquiring more memory from the underlying operating system.
- Allocating and deallocating many small blocks may leave the heap in a state where there are a lot of small free blocks interspersed between the used blocks. A request to allocate a large block may fail because none of the free blocks are large enough to satisfy the allocation request even though the combined size of the free blocks may be large enough. This is called heap fragmentation.
- When a used block that is adjacent to a free block is deallocated the new free block may be merged with the adjacent free block to create a larger free block effectively reducing the fragmentation of the heap.
The stack
- The stack often works in close tandem with a special register on the CPU named the stack pointer. Initially the stack pointer points to the top of the stack (the highest address on the stack).
- The CPU has special instructions for pushing values onto the stack and popping them off the stack. Each push stores the value at the current location of the stack pointer and decreases the stack pointer. A pop retrieves the value pointed to by the stack pointer and then increases the stack pointer (don’t be confused by the fact that adding a value to the stack decreases the stack pointer and removing a value increases it. Remember that the stack grows to the bottom). The values stored and retrieved are the values of the CPU registers.
- If a function has parameters, these are pushed onto the stack before the call to the function. The code in the function is then able to navigate up the stack from the current stack pointer to locate these values.
- When a function is called the CPU uses special instructions that push the current instruction pointer onto the stack, i.e. the address of the code executing on the stack. The CPU then jumps to the function by setting the instruction pointer to the address of the function called. Later, when the function returns, the old instruction pointer is popped off the stack and execution resumes at the code just after the call to the function.
- When a function is entered, the stack pointer is decreased to allocate more space on the stack for local (automatic) variables. If the function has one local 32 bit variable four bytes are set aside on the stack. When the function returns, the stack pointer is moved back to free the allocated area.
- Nesting function calls work like a charm. Each new call will allocate function parameters, the return address and space for local variables and these activation records can be stacked for nested calls and will unwind in the correct way when the functions return.
- As the stack is a limited block of memory, you can cause a stack overflow by calling too many nested functions and/or allocating too much space for local variables. Often the memory area used for the stack is set up in such a way that writing below the bottom (the lowest address) of the stack will trigger a trap or exception in the CPU. This exceptional condition can then be caught by the runtime and converted into some kind of stack overflow exception.
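A small sketch that makes this behaviour visible: each nested call gets its own local variable in its own stack frame, typically at a lower address than the caller's (the exact addresses and the growth direction are platform-specific, so treat the output as illustrative only):

#include <iostream>

void nested(int depth)
{
    int local = depth;                  // lives in this call's stack frame
    std::cout << "depth " << depth
              << " local at " << static_cast<void*>(&local) << '\n';
    if (depth < 3)
        nested(depth + 1);              // deeper call, new frame
}

int main()
{
    nested(0);                          // remove the depth check and you eventually get a stack overflow
}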
Can a function be allocated on the heap instead of a stack?
No, activation records for functions (i.e. local or automatic variables) are allocated on the stack, which is used not only to store these variables, but also to keep track of nested function calls.
How the heap is managed is really up to the runtime environment. C uses malloc and C++ uses new, but many other languages have garbage collection.
However, the stack is a more low-level feature closely tied to the processor architecture. Growing the heap when there is not enough space isn’t too hard since it can be implemented in the library call that handles the heap. However, growing the stack is often impossible as the stack overflow only is discovered when it is too late; and shutting down the thread of execution is the only viable option.
In the following C# code
public void Method1()
{
int i = 4;
int y = 2;
class1 cls1 = new class1();
}
Here’s how the memory is managed
Local variables that only need to last as long as the function invocation go on the stack. The heap is used for variables whose lifetime we don’t really know up front but we expect them to last a while. In most languages it’s critical that we know at compile time how large a variable is if we want to store it on the stack.
Objects (which vary in size as we update them) go on the heap because we don’t know at creation time how long they are going to last. In many languages the heap is garbage collected to find objects (such as the cls1 object) that no longer have any references.
In Java, most objects go directly into the heap. In languages like C / C++, structs and classes can often remain on the stack when you’re not dealing with pointers.
More information can be found here:
The difference between stack and heap memory allocation « timmurphy.org
and here:
Creating Objects on the Stack and Heap
This article is the source of picture above: Six important .NET concepts: Stack, heap, value types, reference types, boxing, and unboxing – CodeProject
but be aware it may contain some inaccuracies.
The Stack
When you call a function, the arguments to that function plus some other overhead are put on the stack. Some info (such as where to go on return) is also stored there. When you declare a variable inside your function, that variable is also allocated on the stack.
Deallocating the stack is pretty simple because you always deallocate in the reverse order in which you allocate. Stack stuff is added as you enter functions, the corresponding data is removed as you exit them. This means that you tend to stay within a small region of the stack unless you call lots of functions that call lots of other functions (or create a recursive solution).
The Heap
The heap is a generic name for where you put the data that you create on the fly. If you don’t know how many spaceships your program is going to create, you are likely to use the new (or malloc or equivalent) operator to create each spaceship. This allocation is going to stick around for a while, so it is likely we will free things in a different order than we created them.
Thus, the heap is far more complex, because there end up being regions of memory that are unused interleaved with chunks that are – memory gets fragmented. Finding free memory of the size you need is a difficult problem. This is why the heap should be avoided (though it is still often used).
Implementation
Implementation of both the stack and heap is usually down to the runtime / OS. Often games and other applications that are performance critical create their own memory solutions that grab a large chunk of memory from the heap and then dish it out internally to avoid relying on the OS for memory.
This is only practical if your memory usage is quite different from the norm – i.e for games where you load a level in one huge operation and can chuck the whole lot away in another huge operation.
Physical location in memory
This is less relevant than you think because of a technology called Virtual Memory which makes your program think that you have access to a certain address where the physical data is somewhere else (even on the hard disc!). The addresses you get for the stack are in increasing order as your call tree gets deeper. The addresses for the heap are unpredictable (i.e. implementation specific) and frankly not important.
In Short
A stack is used for static memory allocation and a heap for dynamic memory allocation, both stored in the computer’s RAM.
In Detail
The Stack
The stack is a “LIFO” (last in, first out) data structure, that is managed and optimized by the CPU quite closely. Every time a function declares a new variable, it is “pushed” onto the stack. Then every time a function exits, all of the variables pushed onto the stack by that function, are freed (that is to say, they are deleted). Once a stack variable is freed, that region of memory becomes available for other stack variables.
The advantage of using the stack to store variables, is that memory is managed for you. You don’t have to allocate memory by hand, or free it once you don’t need it any more. What’s more, because the CPU organizes stack memory so efficiently, reading from and writing to stack variables is very fast.
More can be found here.
The Heap
The heap is a region of your computer’s memory that is not managed automatically for you, and is not as tightly managed by the CPU. It is a more free-floating region of memory (and is larger). To allocate memory on the heap, you must use malloc() or calloc(), which are built-in C functions. Once you have allocated memory on the heap, you are responsible for using free() to deallocate that memory once you don’t need it any more.
If you fail to do this, your program will have what is known as a memory leak. That is, memory on the heap will still be set aside (and won’t be available to other processes). As we will see in the debugging section, there is a tool called Valgrind that can help you detect memory leaks.
Unlike the stack, the heap does not have size restrictions on variable size (apart from the obvious physical limitations of your computer). Heap memory is slightly slower to be read from and written to, because one has to use pointers to access memory on the heap. We will talk about pointers shortly.
Unlike the stack, variables created on the heap are accessible by any function, anywhere in your program. Heap variables are essentially global in scope.
More can be found here.
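A minimal sketch of that manual contract (malloc and free are C functions, shown here from C++; running a tool such as Valgrind on this program with the free() line removed will report the leak):

#include <cstdio>
#include <cstdlib>   // std::malloc, std::free

int main()
{
    // Ask the heap for room for 100 ints.
    int* data = static_cast<int*>(std::malloc(100 * sizeof(int)));
    if (data == nullptr)
        return 1;                 // heap allocation can fail; always check

    data[0] = 42;
    std::printf("%d\n", data[0]);

    std::free(data);              // our responsibility; omitting this leaks
    return 0;
}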
Variables allocated on the stack are stored directly to the memory and access to this memory is very fast, and its allocation is dealt with when the program is compiled. When a function or a method calls another function which in turns calls another function, etc., the execution of all those functions remains suspended until the very last function returns its value. The stack is always reserved in a LIFO order, the most recently reserved block is always the next block to be freed. This makes it really simple to keep track of the stack, freeing a block from the stack is nothing more than adjusting one pointer.
Variables allocated on the heap have their memory allocated at run time and accessing this memory is a bit slower, but the heap size is only limited by the size of virtual memory. Elements of the heap have no dependencies with each other and can always be accessed randomly at any time. You can allocate a block at any time and free it at any time. This makes it much more complex to keep track of which parts of the heap are allocated or free at any given time.
You can use the stack if you know exactly how much data you need to allocate before compile time, and it is not too big. You can use the heap if you don’t know exactly how much data you will need at runtime or if you need to allocate a lot of data.
In a multi-threaded situation each thread will have its own completely independent stack, but they will share the heap. The stack is thread specific and the heap is application specific. The stack is important to consider in exception handling and thread executions.
Each thread gets a stack, while there’s typically only one heap for the application (although it isn’t uncommon to have multiple heaps for different types of allocation).
At run-time, if the application needs more heap, it can allocate memory from free memory, and if the stack needs memory, it can allocate it from the free memory already reserved for the application.
Even more detail is given here and here.
Now come to your question’s answers.
To what extent are they controlled by the OS or language runtime?
The OS allocates the stack for each system-level thread when the thread is created. Typically the OS is called by the language runtime to allocate the heap for the application.
More can be found here.
What is their scope?
Already covered above.
“You can use the stack if you know exactly how much data you need to allocate before compile time, and it is not too big. You can use the heap if you don’t know exactly how much data you will need at runtime or if you need to allocate a lot of data.”
More can be found in here.
What determines the size of each of them?
The size of the stack is set by OS when a thread is created. The size of the heap is set on application startup, but it can grow as space is needed (the allocator requests more memory from the operating system).
What makes one faster?
Stack allocation is much faster since all it really does is move the stack pointer. Using memory pools, you can get comparable performance out of heap allocation, but that comes with a slight added complexity and its own headaches.
Also, stack vs. heap is not only a performance consideration; it also tells you a lot about the expected lifetime of objects.
Details can be found from here.
How do you stop scripters from slamming your website hundreds of times a second?
How about implementing something like SO does with the CAPTCHAs?
If you’re using the site normally, you’ll probably never see one. If you happen to reload the same page too often, post successive comments too quickly, or something else that triggers an alarm, make them prove they’re human. In your case, this would probably be constant reloads of the same page, following every link on a page quickly, or filling in an order form too fast to be human.
If they fail the check x times in a row (say, 2 or 3), give that IP a timeout or other such measure. Then at the end of the timeout, dump them back to the check again.
Since you have unregistered users accessing the site, you do have only IPs to go on. You can issue sessions to each browser and track that way if you wish. And, of course, throw up a human-check if too many sessions are being (re-)created in succession (in case a bot keeps deleting the cookie).
As far as catching too many innocents, you can put up a disclaimer on the human-check page: “This page may also appear if too many anonymous users are viewing our site from the same location. We encourage you to register or login to avoid this.” (Adjust the wording appropriately.)
Besides, what are the odds that X people are loading the same page(s) at the same time from one IP? If they’re high, maybe you need a different trigger mechanism for your bot alarm.
Edit: Another option is if they fail too many times, and you’re confident about the product’s demand, to block them and make them personally CALL you to remove the block.
Having people call does seem like an asinine measure, but it makes sure there’s a human somewhere behind the computer. The key is to have the block only be in place for a condition which should almost never happen unless it’s a bot (e.g. fail the check multiple times in a row). Then it FORCES human interaction – to pick up the phone.
In response to the comment of having them call me, there’s obviously that tradeoff here. Are you worried enough about ensuring your users are human to accept a couple phone calls when they go on sale? If I were so concerned about a product getting to human users, I’d have to make this decision, perhaps sacrificing a (small) bit of my time in the process.
Since it seems like you’re determined to not let bots get the upper hand/slam your site, I believe the phone may be a good option. Since I don’t make a profit off your product, I have no interest in receiving these calls. Were you to share some of that profit, however, I may become interested. As this is your product, you have to decide how much you care and implement accordingly.
The other ways of releasing the block just aren’t as effective: a timeout (but they’d get to slam your site again after, rinse-repeat), a long timeout (if it was really a human trying to buy your product, they’d be SOL and punished for failing the check), email (easily done by bots), fax (same), or snail mail (takes too long).
You could, of course, instead have the timeout period increase per IP for each time they get a timeout. Just make sure you’re not punishing true humans inadvertently.
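As a sketch of that last idea, here is a tiny in-memory tracker (hypothetical names; a real site would keep this state somewhere shared, such as Redis) that doubles an IP's timeout each time it fails the human check:

#include <algorithm>
#include <chrono>
#include <string>
#include <unordered_map>

struct Offender {
    int failures = 0;
    std::chrono::steady_clock::time_point blocked_until{};
};

class BlockList {
    std::unordered_map<std::string, Offender> byIp_;
public:
    bool isBlocked(const std::string& ip) {
        auto it = byIp_.find(ip);
        return it != byIp_.end() &&
               std::chrono::steady_clock::now() < it->second.blocked_until;
    }
    // Call this when an IP fails the human check; the timeout doubles each time.
    void recordFailure(const std::string& ip) {
        auto& o = byIp_[ip];
        ++o.failures;
        auto timeout = std::chrono::minutes(1) * (1 << std::min(o.failures, 10));
        o.blocked_until = std::chrono::steady_clock::now() + timeout;
    }
};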
Performance optimization strategies as a last resort
Let’s assume:
- the code already is working correctly
- the algorithms chosen are already optimal for the circumstances of the problem
- the code has been measured, and the offending routines have been isolated
- all attempts to optimize will also be measured to ensure they do not make matters worse
OK, you’re defining the problem to where it would seem there is not much room for improvement. That is fairly rare, in my experience. I tried to explain this in a Dr. Dobbs article in November 1993, by starting from a conventionally well-designed non-trivial program with no obvious waste and taking it through a series of optimizations until its wall-clock time was reduced from 48 seconds to 1.1 seconds, and the source code size was reduced by a factor of 4. My diagnostic tool was this. The sequence of changes was this:
- The first problem found was use of list clusters (now called “iterators” and “container classes”) accounting for over half the time. Those were replaced with fairly simple code, bringing the time down to 20 seconds.
- Now the largest time-taker is more list-building. As a percentage, it was not so big before, but now it is because the bigger problem was removed. I find a way to speed it up, and the time drops to 17 seconds.
- Now it is harder to find obvious culprits, but there are a few smaller ones that I can do something about, and the time drops to 13 sec.
Now I seem to have hit a wall. The samples are telling me exactly what it is doing, but I can’t seem to find anything that I can improve. Then I reflect on the basic design of the program, on its transaction-driven structure, and ask if all the list-searching that it is doing is actually mandated by the requirements of the problem.
Then I hit upon a re-design, where the program code is actually generated (via preprocessor macros) from a smaller set of source, and in which the program is not constantly figuring out things that the programmer knows are fairly predictable. In other words, don’t “interpret” the sequence of things to do, “compile” it.
- That redesign is done, shrinking the source code by a factor of 4, and the time is reduced to 10 seconds.
Now, because it’s getting so quick, it’s hard to sample, so I give it 10 times as much work to do, but the following times are based on the original workload.
- More diagnosis reveals that it is spending time in queue-management. In-lining these reduces the time to 7 seconds.
- Now a big time-taker is the diagnostic printing I had been doing. Flush that – 4 seconds.
- Now the biggest time-takers are calls to malloc and free. Recycle objects – 2.6 seconds.
- Continuing to sample, I still find operations that are not strictly necessary – 1.1 seconds.
Total speedup factor: 43.6
Now no two programs are alike, but in non-toy software I’ve always seen a progression like this. First you get the easy stuff, and then the more difficult, until you get to a point of diminishing returns. Then the insight you gain may well lead to a redesign, starting a new round of speedups, until you again hit diminishing returns. Now this is the point at which it might make sense to wonder whether ++i or i++ or for(;;) or while(1) are faster: the kinds of questions I see so often on Stack Overflow.
P.S. It may be wondered why I didn’t use a profiler. The answer is that almost every one of these “problems” was a function call site, which stack samples pinpoint. Profilers, even today, are just barely coming around to the idea that statements and call instructions are more important to locate, and easier to fix, than whole functions.
I actually built a profiler to do this, but for a real down-and-dirty intimacy with what the code is doing, there’s no substitute for getting your fingers right in it. It is not an issue that the number of samples is small, because none of the problems being found are so tiny that they are easily missed.
ADDED: jerryjvl requested some examples. Here is the first problem. It consists of a small number of separate lines of code, together taking over half the time:
/* IF ALL TASKS DONE, SEND ITC_ACKOP, AND DELETE OP */
if (ptop->current_task >= ILST_LENGTH(ptop->tasklist)){
. . .
/* FOR EACH OPERATION REQUEST */
for ( ptop = ILST_FIRST(oplist); ptop != NULL; ptop = ILST_NEXT(oplist, ptop)){
. . .
/* GET CURRENT TASK */
ptask = ILST_NTH(ptop->tasklist, ptop->current_task)
These were using the list cluster ILST (similar to a list class). They are implemented in the usual way, with “information hiding” meaning that the users of the class were not supposed to have to care how they were implemented. When these lines were written (out of roughly 800 lines of code) thought was not given to the idea that these could be a “bottleneck” (I hate that word). They are simply the recommended way to do things. It is easy to say in hindsight that these should have been avoided, but in my experience all performance problems are like that. In general, it is good to try to avoid creating performance problems. It is even better to find and fix the ones that are created, even though they “should have been avoided” (in hindsight). I hope that gives a bit of the flavor.
Here is the second problem, in two separate lines:
/* ADD TASK TO TASK LIST */
ILST_APPEND(ptop->tasklist, ptask)
. . .
/* ADD TRANSACTION TO TRANSACTION QUEUE */
ILST_APPEND(trnque, ptrn)
These are building lists by appending items to their ends. (The fix was to collect the items in arrays, and build the lists all at once.) The interesting thing is that these statements only cost (i.e. were on the call stack) 3/48 of the original time, so they were not in fact a big problem at the beginning. However, after removing the first problem, they cost 3/20 of the time and so were now a “bigger fish”. In general, that’s how it goes.
I might add that this project was distilled from a real project I helped on. In that project, the performance problems were far more dramatic (as were the speedups), such as calling a database-access routine within an inner loop to see if a task was finished.
REFERENCE ADDED: The source code, both original and redesigned, can be found in www.ddj.com, for 1993, in file 9311.zip, files slug.asc and slug.zip.
EDIT 2011/11/26: There is now a SourceForge project containing source code in Visual C++ and a blow-by-blow description of how it was tuned. It only goes through the first half of the scenario described above, and it doesn’t follow exactly the same sequence, but still gets a 2-3 order of magnitude speedup.
Suggestions:
- Pre-compute rather than re-calculate: for any loops or repeated calls that contain calculations with a relatively limited range of inputs, consider making a lookup (array or dictionary) that contains the result of that calculation for all values in the valid range of inputs. Then use a simple lookup inside the algorithm instead (a small sketch follows this list).
Down-sides: if few of the pre-computed values are actually used this may make matters worse; also, the lookup may take significant memory.
- Don’t use library methods: most libraries need to be written to operate correctly under a broad range of scenarios, and perform null checks on parameters, etc. By re-implementing a method you may be able to strip out a lot of logic that does not apply in the exact circumstance you are using it.
Down-sides: writing additional code means more surface area for bugs.
- Do use library methods: to contradict myself, language libraries get written by people that are a lot smarter than you or me; odds are they did it better and faster. Do not implement it yourself unless you can actually make it faster (i.e.: always measure!).
- Cheat: in some cases although an exact calculation may exist for your problem, you may not need ‘exact’; sometimes an approximation may be ‘good enough’ and a lot faster in the deal. Ask yourself, does it really matter if the answer is out by 1%? 5%? Even 10%?
Down-sides: Well… the answer won’t be exact.
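A minimal sketch of the pre-compute suggestion (expensiveCalc is a made-up stand-in for the hot calculation, and the valid input range is assumed to be 0..255):

#include <array>
#include <cmath>

// Imagine this is the repeatedly-called calculation in the hot loop.
double expensiveCalc(int x) { return std::sqrt(x) * std::log(x + 1.0); }

// Pre-compute it once for every valid input.
std::array<double, 256> buildTable() {
    std::array<double, 256> table{};
    for (int i = 0; i < 256; ++i)
        table[i] = expensiveCalc(i);
    return table;
}

double fastCalc(int x) {
    static const std::array<double, 256> table = buildTable();
    return table[x];    // a simple lookup replaces the calculation
}

The trade-off is exactly the down-side noted above: 256 doubles of memory bought in exchange for skipping the sqrt/log work on every call.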
When you can’t improve the performance any more – see if you can improve the perceived performance instead.
You may not be able to make your fooCalc algorithm faster, but often there are ways to make your application seem more responsive to the user.
A few examples:
- anticipating what the user is going to request and start working on that before then
- displaying results as they come in, instead of all at once at the end
- Accurate progress meter
These won’t make your program faster, but it might make your users happier with the speed you have.
I spend most of my life in just this place. The broad strokes are to run your profiler and get it to record:
- Cache misses. Data cache is the #1 source of stalls in most programs. Improve cache hit rate by reorganizing offending data structures to have better locality; pack structures and numerical types down to eliminate wasted bytes (and therefore wasted cache fetches); prefetch data wherever possible to reduce stalls (see the sketch below).
- Load-hit-stores. Compiler assumptions about pointer aliasing, and cases where data is moved between disconnected register sets via memory, can cause a certain pathological behavior that causes the entire CPU pipeline to clear on a load op. Find places where floats, vectors, and ints are being cast to one another and eliminate them. Use __restrict liberally to promise the compiler about aliasing.
- Microcoded operations. Most processors have some operations that cannot be pipelined, but instead run a tiny subroutine stored in ROM. Examples on the PowerPC are integer multiply, divide, and shift-by-variable-amount. The problem is that the entire pipeline stops dead while this operation is executing. Try to eliminate use of these operations or at least break them down into their constituent pipelined ops so you can get the benefit of superscalar dispatch on whatever the rest of your program is doing.
- Branch mispredicts. These too empty the pipeline. Find cases where the CPU is spending a lot of time refilling the pipe after a branch, and use branch hinting if available to get it to predict correctly more often. Or better yet, replace branches with conditional-moves wherever possible, especially after floating point operations because their pipe is usually deeper and reading the condition flags after fcmp can cause a stall.
- Sequential floating-point ops. Make these SIMD.
And one more thing I like to do:
- Set your compiler to output assembly listings and look at what it emits for the hotspot functions in your code. All those clever optimizations that “a good compiler should be able to do for you automatically”? Chances are your actual compiler doesn’t do them. I’ve seen GCC emit truly WTF code.
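To make the cache-miss point concrete, a common reorganization is switching from an array-of-structs to a struct-of-arrays, so a hot loop only pulls the bytes it actually needs through the cache; a minimal sketch (made-up particle fields):

#include <cstddef>
#include <vector>

// Array-of-structs: updating positions also drags colour and health
// data through the cache, even though the loop never reads them.
struct ParticleAoS { float x, y, z; float colour[4]; int health; };

// Struct-of-arrays: the position loop now touches contiguous,
// tightly packed floats only, so far fewer cache lines are fetched.
struct ParticlesSoA {
    std::vector<float> x, y, z;
    std::vector<float> colour;   // 4 entries per particle, stored separately
    std::vector<int>   health;
};

void advance(ParticlesSoA& p, float dx, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        p.x[i] += dx;            // sequential, cache-friendly access
}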
More suggestions:
- Avoid I/O: Any I/O (disk, network, ports, etc.) is always going to be far slower than any code that is performing calculations, so get rid of any I/O that you do not strictly need.
- Move I/O up-front: Load up all the data you are going to need for a calculation up-front, so that you do not have repeated I/O waits within the core of a critical algorithm (and maybe as a result repeated disk seeks, when loading all the data in one hit may avoid seeking).
- Delay I/O: Do not write out your results until the calculation is over, store them in a data structure and then dump that out in one go at the end when the hard work is done.
- Threaded I/O: For those daring enough, combine ‘I/O up-front’ or ‘Delay I/O’ with the actual calculation by moving the loading into a parallel thread, so that while you are loading more data you can work on a calculation on the data you already have, or while you calculate the next batch of data you can simultaneously write out the results from the last batch.
I love all of these:
- Graph algorithms, the Bellman-Ford algorithm in particular.
- Scheduling algorithms, the round-robin scheduling algorithm in particular.
- Dynamic programming algorithms, the 0/1 knapsack algorithm in particular.
- Backtracking algorithms, the 8-queens algorithm in particular.
- Greedy algorithms, the fractional knapsack algorithm in particular.
We use all these algorithms in our daily life in various forms at various places.
For example, every shopkeeper applies one or more of several scheduling algorithms to serve his customers, depending upon his service policy and situation. No single scheduling algorithm fits every situation.
All of us mentally apply one of the graph algorithms when we plan the shortest route to be taken when we go out for doing multiple things in one trip.
All of us apply one of the Greedy algorithms while selecting career, job, girlfriends, friends etc.
All of us apply one of the Dynamic programming algorithms when we do simple multiplication mentally by referring to the various mathematical products table in our memory.
How much faster is C compared to Python?
Which sorting algorithm does Python Sorted use?
It uses TimSort, a sort algorithm which was invented by Tim Peters, and is now used in other languages such as Java.
TimSort is a complex algorithm which uses the best of many other algorithms, and has the advantage of being stable – in other words, if two elements A & B are in the order A then B before the sort and those elements compare equal during the sort, then the algorithm guarantees that the result will maintain that A then B ordering.
That means, for example, that if you want to order a set of student scores by score and then by name (so equal scores are ordered alphabetically), you can first sort by name and then sort by score; the stable second sort preserves the alphabetical order within each score.
TimSort has good performance against data sets which are partially sorted or already sorted (areas where some other algorithms struggle).
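The same two-pass trick works in any language whose sort is stable; here is a minimal C++ sketch using std::stable_sort (the Student type is made up for illustration):

#include <algorithm>
#include <string>
#include <vector>

struct Student { std::string name; int score; };

void orderByScoreThenName(std::vector<Student>& v) {
    // First pass: alphabetical order by name.
    std::stable_sort(v.begin(), v.end(),
                     [](const Student& a, const Student& b) { return a.name < b.name; });
    // Second pass: by score; stability keeps equal scores in alphabetical order.
    std::stable_sort(v.begin(), v.end(),
                     [](const Student& a, const Student& b) { return a.score < b.score; });
}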
I’m currently coding a SAT solver algorithm that will have to take millions of input data, and I was wondering if I should switch from Python to C.
Answer: Using best-in-class equivalent algorithms, optimized compiled C code is often multiple orders of magnitude faster than Python code interpreted by CPython (the main Python implementation). Other Python implementations (like PyPy) might be a bit better, but not vastly so. Some computations fit Python better, but I have a feeling that a SAT solver implementation will not be competitive if written in Python.
All that said, do you need to write a new implementation? Could you use one of the excellent ones out there? CDCL implementations often do a good job, and there are various open-source ones readily available (e.g., this one: https://github.com/togatoga/togasat).
Comments:
1- I mean, it also depends. I recall seeing an analysis some time ago that showed CPython can be as fast as C… provided you are almost exclusively using library functions written in C. That being said, for any non-trivial Python program it will probably be the case that you spend quite a bit of time in the interpreter, and not in C library functions.

The dot operator’s left-hand operand is an object reference, which is needed so the access can be contextualized with that object’s state.
The double colon operator has two uses in this case:
- Access to a static class member.
- Access to a parent class member (needs the context of a running method which can be static or not).
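In C++ terms, a minimal sketch of the two cases (made-up class names):

#include <iostream>

struct Base {
    virtual void hello() { std::cout << "Base\n"; }
    virtual ~Base() = default;
};

struct Derived : Base {
    static int count;                 // a static class member
    void hello() override {
        Base::hello();                // :: reaches the parent class's version
        std::cout << "Derived\n";
    }
};
int Derived::count = 0;

int main() {
    Derived d;
    d.hello();                        // . needs an object on the left
    Derived::count = 1;               // :: needs no object for static members
}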
Why Rust?
There are two main reasons: performance and familiarity. While Rust has been shown to be faster than C++, it’s not as fast as assembly language—and many developers have been working in assembly for so long that they’re not willing to give it up.
However, there’s another reason why some developers are sticking with C++: compiler optimization.
C++ compilers are more intelligent than Rust compilers when it comes to optimizing code for performance, so if you’re looking for top-notch performance from your application, then you might want to stick with C++ until the Rust compiler has caught up.
The C++ programming language definition is written in English and in other human languages. Programming language definitions are written for humans to read. They are not written in programming languages.
An actual implementation of a C++ compiler (or interpreter) can be written in any general-purpose programming language. Some are written in C, some are written in C++, some are written in other programming languages. Some are written with the help of compiler development tools and infrastructure (e.g., lex, yacc, flex, bison, antlr, LLVM, etc.). It just depends on the specific C++ implementation you’re looking at.
This is true of all high-level programming languages. Any general-purpose programming language can be used to implement a compiler or interpreter, no matter what programming language you are compiling or interpreting.
Learn other languages. It will broaden your perspective and hopefully make you a better developer.
Alan Perlis, one of the developers of ALGOL, once said, “A language that doesn’t affect the way you think about programming, is not worth knowing.”
Conversely, that implies learning other languages can and will affect the way you think about programming, provided you get some variety of exposure.
C++ is a multiparadigm language. But if you haven’t had exposure to those paradigms in a more focused setting, you might not understand the value they bring, or their strengths, weaknesses, idioms, and insights.
So even if you do the bulk of your programming in C++, you may not be using it the most effective way possible.
I know I personally have gaps, because I haven’t explored certain paradigms myself. I owe it to myself to at least dip my toe in some of them. I know this, because every time I learn a new language or environment, I sense a gap closing—a gap I may not have been aware of previously.
You don’t even need to spend a lot of time to gain value, either. I may have only spent a week with Scala, for example, but I learned more than just the base language from it. I hadn’t really encountered fold and match expressions as such basic and integral concepts, for example.
And despite its negative reputation, I found Perl to be an excellent language to learn about multiple programming techniques.
Mark Jason Dominus’ Higher Order Perl opened my eyes to a number of techniques that I believe originated more from the LISP world.
Example: Partial Function Application
In Perl, you can implement partial function application (sometimes conflated with the related concept, currying) with your eyes closed and one hand behind your back. Suppose I want to bind the first argument of foo():
- my $f = sub { return foo($arg1, @_); };
Now I can invoke $f as a function with that first argument bound, with a slight syntax tweak: &$f(…) or $f->(…). I don’t even need to think about the rest.
Trying to learn about that for the first time in C++ likely would have lost the forest for the trees.
C++98 was quite primitive. It offered std::bind1st and std::bind2nd for 2-argument function objects only. Boost offered boost::bind,[1] which had its own limitations. And because these were relatively uncommon, they were unfamiliar to many C++ users (at least among the crowd I was in). C++ lambdas (introduced with C++11) help, but they don’t work for arbitrary arguments until C++14. For that, you probably need parameter packs, forwarding references, and std::forward.[2] And then there are object lifetimes to consider, so for your bound arguments you might need to trade off between copy, move, capturing a reference, smart pointers,[3][4] etc. Oh, and finally, it won’t yield a function pointer, but rather a function object, so it’s not usable in places that need a pure function pointer. Although, if it manages to be capture-less, it can provide a pure function pointer by applying unary + to it…
Can you see how you might lose the forest for the trees here?
If you didn’t already have some idea of the usefulness of partial application, would you even try? If you hadn’t encountered the concept before, would it have even come to mind when you saw lambdas?
Punchline
In practice, if you’re already well versed in C++, it’s not actually all that difficult to implement techniques like partial application in C++. You’re already accustomed to the rigamarole described above, since C++ confronts you with those sorts of decisions regularly.
It does cloud things noticeably, however. Learning the concepts in a simpler environment separates you from the implementation noise.
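For instance, once you know what you are after, a C++14 generic lambda gets you roughly the same effect as the Perl one-liner; a minimal sketch with a hypothetical foo (the capture copies the bound argument):

#include <iostream>
#include <utility>

int foo(int a, int b, int c) { return a * 100 + b * 10 + c; }

int main() {
    int arg1 = 7;
    // Bind the first argument; forward whatever else is passed later.
    auto f = [arg1](auto&&... rest) {
        return foo(arg1, std::forward<decltype(rest)>(rest)...);
    };
    std::cout << f(2, 3) << '\n';   // prints 723
}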
Learn other languages and become a more rounded and hopefully better developer. Step away from C++’s innumerable trees of details to see different areas of the forest more clearly.
Footnotes
In C++, how can a template object be deleted with or without the delete keyword? (template <class T> class Obj;)
If you allocated it with new, then delete it by passing the pointer to delete, just like any other pointer. There’s nothing particularly special about a pointer to an object whose type happens to be a template.
Most of the time, though, you shouldn’t be calling new and delete directly.
See: https://youtu.be/JfmTagWcqoE
When should “new” be used in C++?
new’s use should be confined to very narrow use-cases. Examples of use cases where new is ok:
- Writing low-level memory management code such as allocators and deallocators, smart pointers, etc.
- Working with code/libraries that use outdated C++ programming idioms, such as Qt, but then only to the extent necessary to work with that library.
- You need to preallocate an object to pass to an API that indicates it will assume ownership of it (i.e., responsibility for deleting it). If you are going to work with that object at all before passing it off, you should not use new; use a unique_ptr and call .release() when calling the API.
The way to dynamically allocate memory correctly in modern C++ is std::make_unique or std::make_shared. The first returns a std::unique_ptr to the allocated object (which will delete the object for you when it goes out of scope); the second returns a std::shared_ptr, which can be copied around, and the object will be deleted for you when there are no more copies of the shared pointer.
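A minimal sketch of the modern idiom next to raw new (Widget is a made-up type):

#include <memory>

struct Widget { int value = 0; };

void modern() {
    auto w = std::make_unique<Widget>();      // deleted automatically at scope exit
    w->value = 42;

    auto shared = std::make_shared<Widget>(); // freed when the last copy goes away
    auto another = shared;                    // reference count is now 2
}   // no delete anywhere

void legacy() {
    Widget* w = new Widget();                 // now *you* own the cleanup
    w->value = 42;
    delete w;                                 // easy to forget on every early-return path
}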
Why do array indexes start with 0 (zero) in many programming languages?
Array indices should start at 0. This is not just an efficiency hack for ancient computers, or a reflection of the underlying memory model, or some other kind of historical accident—forget all of that. Zero-based indexing actually simplifies array-related math for the programmer, and simpler math leads to fewer bugs. Here are some examples.
- Suppose you’re writing a hash table that maps each integer key to one of n buckets. If your array of buckets is indexed starting at 0, you can write bucket = key mod n; but if it’s indexed starting at 1, you have to write bucket = (key mod n) + 1.
- Suppose you’re writing code to serialize a rectangular array of pixels, with width w and height h, to a file (which we’ll think of as a one-dimensional array of length w*h). With 0-indexed arrays, pixel (x, y) goes into position y*w + x; with 1-indexed arrays, pixel (x, y) goes into position y*w + x - w. (A small check of this formula follows the list.)
- Suppose you want to put the letters ‘A’ through ‘Z’ into an array of length 26, and you have a function ord that maps a character to its ASCII value. With 0-indexed arrays, the character c is put at index ord(c) - ord(‘A’); with 1-indexed arrays, it’s put at index ord(c) - ord(‘A’) + 1.
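A quick sketch that exercises the 0-indexed pixel formula from the second example:

#include <cassert>
#include <vector>

int main() {
    const int w = 4, h = 3;
    std::vector<int> file(w * h);       // the flat, 0-indexed "file"

    // Store pixel (x, y) at position y*w + x -- no off-by-one correction needed.
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            file[y * w + x] = 10 * y + x;

    assert(file[0] == 0);               // pixel (0, 0) is the very first slot
    assert(file[2 * w + 3] == 23);      // pixel (3, 2) is the very last slot
}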
It’s in fact one-based indexing that’s the historical accident—human languages needed numbers for “first”, “second”, etc. before we had invented zero. For a practical example of the kinds of problems this accident leads to, consider how the 1800s—well, no, actually, the period from January 1, 1801 through December 31, 1900—came to be known as the “19th century”.
No, almost no one uses Python libraries for machine learning.
Before you start listing counterexamples, notice the emphasized words. Yes, a lot of people use Python for machine learning, because it allows for very fast prototyping and overall exploration of problem space, but none of the libraries they are using for it are actually written in Python. Indeed, they are almost always written in either Fortran or C++ instead, and just interface with Python through some thin wrapper.
The slowness of Python is completely irrelevant if the only thing you do with it is invoking a library function written in highly-optimized C++.
Python’s a great language, but I’m going to mention a few reasons why you might choose Java:
- It’s fast, handles multithreaded well, and scales.
- It’s built for security.
- It has a great ecosystem. (Which includes a deep learning library I work on.)
- Many companies have bet their stack on Java, so there’s demand for Java programmers.
- The JVM is cross-platform, and uses run-time information to manage itself.
- It takes care of memory management.
- Java 8 has lambda expressions, and includes an implementation of JavaScript called Nashorn that runs on the JVM.
- Static typing: Java is typesafe, and its static typing is essentially a form of self-documenting code.
- Java is mature: It’s been around for 20 years, it’s fully backward compatible, and code written decades ago still works.
- Android: Java 7 works on the world’s largest mobile OS.
For those and other reasons, Java is one of the world’s most widely used languages. Oracle says there are 10 million Java programmers worldwide. The Github stats from Eduardo Bonet speak volumes.
It’s a basic programming skills challenge.
If you understand loops, variables and conditionals only, that’s enough to hack out a FizzBuzz. If you’re a bit further along the path, you can write a cleaner FizzBuzz.
The challenge itself is about printing “Fizz” when a number is divisible by 3, “Buzz” when it is divisible by 5, and “FizzBuzz” when it is divisible by both. It’s not really important, except that it steers you to use those elements of programming above.
It can be done in any language as those concepts are foundational to every language.
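For reference, a minimal C++ take on it using exactly those elements (a loop, a variable, and conditionals):

#include <iostream>

int main() {
    for (int i = 1; i <= 100; ++i) {
        if (i % 15 == 0)      std::cout << "FizzBuzz\n";
        else if (i % 3 == 0)  std::cout << "Fizz\n";
        else if (i % 5 == 0)  std::cout << "Buzz\n";
        else                  std::cout << i << '\n';
    }
}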
Let me open with a quote that you’ve probably seen many times:
premature optimization is the root of all evil.
— Donald Knuth
- Programs are regularly gigantic. If you profile a program that isn’t fast enough, you’ll often find histograms that show the top 1,000 functions all taking well under 0.1% of the execution time. “Optimizing” those 1,000 functions is usually not practical and would likely not achieve the desired speedup anyway.
- The number of executed instructions is often relatively irrelevant. Instead, the number of cache misses is far more critical, but it’s also much harder to locate them. Avoiding cache misses is something that may require design work up front, because it affects core data structures.
- Machines are highly heterogeneous, and extracting performance is not just a matter of dealing with the main CPU cores (which may not be homogeneous!), but also to arrange for efficient use of vector units, and accelerators like GPUs, media co-processors, and neural engines. Utilizing all those units is also something that may require design work up front.
- Performance is not just a matter of execution time. It’s also a matter of energy consumption and scalability. And response time: More than ever, software is interactive, and yet has to deal with new kinds of latencies (e.g., from networking).
- Software is an independent industry: If your version 1.0 is too slow, or uses too much battery life, or chokes your data center, you might not get a chance at developing an optimized version 1.5. (In 1974, software was mostly an add-on to hardware.)
- Software is built from independent components: While developing a specific component, you might not know just how hard it will be pushed. If you don’t design for performance from the start you may end up painting yourself into a corner.
All that to say that Knuth’s quote should be taken for what it is: Don’t optimize local instruction counts early on. But don’t skip thinking about optimizing design and data structures from the start, because if performance matters in any way (throughput, latency, energy use, or scalability) it’s something that’s difficult or impossible to “retrofit”. Things to think about:
- How will you evaluate performance? How will you track it during the development and maintenance process?
- How can you avoid computation that’s not needed? This might mean to architect for “lazy evaluation”.
- How will you lay out your data for efficient access (i.e., make best use of the memory hierarchy)?
- How will you organize your algorithms and data structures to take advantage of the available computational resources?
- When considering algorithms, what regime will they work in? A traditional example: “Fast” sorting algorithms are typically only preferable once there are enough elements (often 50+) to sort; if you know that you’ll be repeatedly sorting a dozen elements, those algorithms may not be your best option.
- Are the complexities introduced to achieve better performance worth their overall (negative?) impact on the project?
When all that is handled adequately, you might eventually have to deal with “nitty gritty code optimization”, and it will have a chance to be meaningful.
Now, regarding the original question:
What do most programmers do (when optimizing code) that is essentially wrong?
I don’t think that’s generalizable. I think Knuth’s quote is often mis-construed… but I wouldn’t say that “most programmers” do that. I’m not even sure that “most programmers” optimize code at all. I also think that Knuth’s quote is often ignored, and that’s not great either… but again, I’d venture that it doesn’t involve “most programmers”. Programmers are a very diverse bunch, with many diverse roles, working on a great diversity of projects that may or may not have concrete performance constraints.
In other words, I think the question has no meaningful answer.
Finally, I’d like to close with a quote from the late Len Lattanzi (whom I had the pleasure of having as a colleague for a few years):
Belated pessimization is the leaf of no good.
— Len Lattanzi
Pros and Cons of Java vs Node vs .NET: Which stack should I go for, .NET, Node, or Java?
Umm, that’s really up to you. But there are some tradeoffs.
Java:
Pros:
Extremely widely used. You’ll never want for a job if you are good at it. Other languages (Scala, Kotlin, Groovy) run on the JVM as well. There is a lot of cool big data processing that you can use Java for (Apache Spark, Hadoop, etc.).
Cons:
Tons of bloatware (WebSphere, WebLogic, Adobe Experience Manager) runs on Java. You’re likely to end up coding up some legacy enterprise garbage. UIs written in Java are crap at best.
.NET:
Pros:
Well supported by Microsoft. Visual Studio is gorgeous.
Cons:
Not so many open source libraries, you’ll likely be coding for Windows. This means that your development machine will be Windows (dealbreaker for me). Also, no cool little startup will use .NET ever. Not as many jobs as Java. UIs written in .NET are crap at best.
Node.js
Pros:
Much more concise and faster to develop for than either .NET or Java. Almost as many open source libraries as there are for Java.
Cons:
Memory management, thread management, and overall performance aren’t as good as Java or .NET. You’ll have a harder time finding a Node.js job unless you also know a client side JS framework such as Vue.js or React.js. In that case, you’ll be very much in demand.
Others:
If you want to stick to server-side coding, you should consider Rust and Golang. Both are more performant than any of the above. Benchmarks I’ve read suggest that Rust is overall more performant but that Golang has better concurrency management.
- Inside another object that wraps them
- Two methods returning one object each
- Pass in two objects as collaborating parameters so methods can be called on them
The second way is good in OO. You do your calculation once, store the two results as state in an object, use two separate accessors in the calling code.
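A minimal sketch of that idea (the class and member names here are made up for illustration):

```cpp
// Do the calculation once, keep the two results as state, expose them via accessors.
class DivisionResult {
    int quotient_;
    int remainder_;
public:
    DivisionResult(int dividend, int divisor)       // one calculation
        : quotient_(dividend / divisor),
          remainder_(dividend % divisor) {}
    int quotient()  const { return quotient_; }     // accessor #1
    int remainder() const { return remainder_; }    // accessor #2
};
```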
Do all pointers have the same size in C++?
Theoretically, no. Not even for a given system. A char* may have a size different from an int*.
In practice, yes.
First, note that all pointers to object types (as opposed to function types) must be able to round-trip through void* (modulo cv-qualification). So if different object pointer types had different sizes, void* would have to be as large as the largest of them.
Second, for pointers to object types there aren’t many potential advantages to having them be of different size. Why make things complex if they can be made simple at no perceivable cost?
Third… plenty of reasonable code “out there” assumes that all pointers have the same size. So building an implementation where that’s not the case handicaps that implementation right out of the gate.
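A small illustration of both points; note that the static_assert below is merely what implementations commonly provide, not something the standard guarantees:

```cpp
#include <cassert>

int main() {
    int x = 0;
    void *p = &x;                      // any object pointer converts to void*
    int *q = static_cast<int *>(p);    // ...and converts back to the original pointer value
    assert(q == &x);

    // Commonly true in practice, but not required by the standard:
    static_assert(sizeof(int *) == sizeof(void *), "equal pointer sizes on this implementation");
}
```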
For function pointers it may actually sometimes be interesting from a performance point of view to give them twice the size of ordinary pointers, because they may have to encapsulate both the address of the function and the address of the associated data segment (in shared library models where a separate data segment is created for every shared library instance). However, because of compatibility considerations even those implementations just add an indirection to keep the function pointers compatible with void* (even though function pointers are not strictly required by the standard to round-trip through void*).
- Try it and measure it
- On the scale of bad programming, if is at the bottom of the list.
Compilers are very smart about these things. As an example, consider the alternatives:
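The original snippets aren’t preserved here; a typical pair of alternatives in this spirit might be:

```cpp
int max_with_if(int a, int b) {
    int m;
    if (a > b) m = a; else m = b;   // written with an explicit if
    return m;
}

int max_with_conditional(int a, int b) {
    return a > b ? a : b;           // written with the conditional operator
}
```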
Both compile to the exact same code sequence, which does NOT include a branch.
Source code is supposed to be a way to express your intent to the computer. You really should write the source to be as clear as possible and leave the microoptimizations to the compiler. Once you get the program working and correct, then you can look at performance. Use profiling tools to figure out where the time is going and speed up the parts that are slow AND where being slow actually matters.
By the way, you shouldn’t be afraid of branches either. The branch prediction logic in modern processors is nearly telepathic. AMD is using neural nets inside the chip (!). The predictors will correctly guess what is going to happen more than 90% of the time.
What does strongly typed mean in programming?
The other answers are mistaken. This is a very common confusion. They describe statically typed languages, not strongly typed languages. There is a big difference.
Strongly typed vs weakly typed:
In strongly typed languages you get an error if the types do not match in an expression. It does not matter if the type is determined at compile time (static types) or runtime (dynamic types).
Both Java and Python are strongly typed. In both languages, you get an error if you try to add objects with mismatched types. For example, in Python, you get an error if you try to add a number and a string:
- >>> a = 10
- >>> b = "hello"
- >>> a + b
- Traceback (most recent call last):
- File "<stdin>", line 1, in <module>
- TypeError: unsupported operand type(s) for +: 'int' and 'str'
In Python, you get this error at runtime. In Java, you would get a similar error at compile time. Most statically typed languages are also strongly typed.
The opposite of strongly typed language is weakly typed. In a weakly typed language, there are implicit type conversions. Instead of giving you an error, it will convert one of the values automatically and produce a result, even if such conversion loses data. This often leads to unexpected and unpredictable behavior.
Javascript is an example of a weakly typed language.
- > let a = 10
- > let b = "hello"
- > a + b
- '10hello'
Instead of an error, JavaScript will convert a to a string and then concatenate the strings.
Static types vs dynamic types:
In a statically typed language, variables are bound to types and may only hold data of that type. Typically you declare variables and specify the type of data that the variable holds. In some languages, the type can be deduced from what you assign to it, but it still holds that the variable is bound to that type. For example, in Java:
- int a = 3;
- a = "hello"; // Error, a can only contain integers
In a dynamically typed language, variables may hold any type of data. The type of the data is simply determined by what gets assigned to the variable at runtime. Python is dynamically typed, for example:
- a = 10
- a = "hello"
- # no problem, a first held an integer and then a string
Comments:
#1: Don’t confuse strongly typed with statically typed.
Python is dynamically typed and strongly typed.
Javascript is dynamically typed and weakly typed.
Java is statically typed and strongly typed.
C is statically typed and weakly typed.
See these articles for a longer explanation:
Magic lies here – Statically vs Dynamically Typed Languages
Key differences between mainly used languages for data science
I also added a drawing that illustrates how strong and static typing relate to each other:
Python is dynamically typed because types are determined at runtime. The opposite of dynamically typed is statically typed (not strongly typed)
Python is strongly typed because it will give errors when types don’t match instead of performing implicit conversion. The opposite of strongly typed is weakly typed
Python is strongly typed and dynamically typed
What is the difference between finalize() and destructor in Java?
finalize() is not guaranteed to be called, and the programmer has no control over when or in what order finalizers are called.
They are useless and should be ignored.
A destructor is not part of Java. It is a C++ language feature with very precise definitions of when it will be called.
Comments:
1- Until we got to languages like Rust (with the Drop trait) and a few others, was C++ the only language that had the destructor as a concept? I feel like other languages were inspired by that.
2- Many other languages manage memory for you, even ones predating C: COBOL, FORTRAN and so on. That’s another reason why destructors haven’t received much attention.
What are some ways to avoid writing static helper classes in Java?
Mainly getting out of that procedural ‘function operates on parameters passed in’ mindset.
Tactically, the static can normally be moved onto one of the parameter objects. Or all the parameters become an object that the static moves to. A new object might be needed. Once done the static is now a fully fledged method on an object and is not static anymore.
I view this as a positive iterative step in discovering objects for a system.
For cases where a static makes sense (? none come to mind) then a good practice is to move it closer to where it is used either in the same package or on a class that is strongly related.
I avoid having global ‘Utils’ classes full of statics that are unrelated. That’s fairly basic design, keeping unrelated things separate. In this case, the SOLID ISP principle applies: segregate into smaller, more focused interfaces.
Is there any programming language as easy as python and as fast and efficient as C++, if yes why it’s not used very often instead of C or C++ in low level programming like embedded systems, AAA 2D and 3D video games, or robotic?
Not really. I use Python occasionally for “quick hacks” – programs that I’ll probably run once and then delete – and also because I use Blender for 3D modeling and Python is its scripting language.
I used to write quite a bit of JavaScript for web programming but since WASM came along and allows me to run C++ at very nearly full speed inside a web browser, I write almost zero JavaScript these days.
I use C++ for almost everything.
Once you get to know C++ it’s no harder than Python – the main thing I find great about Python is the number of easy-to-find libraries.
But in AAA games – the poor performance of Python pretty much rules it out.
In embedded systems, the computer is generally too small to fit a Python interpreter into memory – so C or C++ is a more likely choice.
Typescript vs JavaScript: Similarities and Differences
JavaScript is a scripting language that was developed by ECMA’s Technical Committee and Brendan Eich. It works perfectly in web browsers without the help of any web server or compiler. It allows you to change HTML and CSS in the browser without a full page reload. That is why it is used to create dynamic and interactive web pages.
TypeScript is a superset of the JavaScript language. It was presented and developed by Microsoft technical fellow Anders Hejlsberg in 2012. TypeScript appeared for a reason: the more JavaScript grew, the heavier and more unreadable JS code became. This became especially evident when developers started to use JavaScript for server-side technologies.
TypeScript is an open-source language that has a compiler that converts TypeScript code to JavaScript code (see the TypeScript playground service). That compiler is cross-browser and also open-source. To start using TypeScript, you can rename your .js files to .ts files, and if there are no logical mistakes in the JS code, you get valid TypeScript code. So, valid JavaScript code is valid TypeScript code; TypeScript just adds some features on top. To learn more about those additions, watch the original video presentation of TypeScript. Meanwhile, we discuss the key differences between JS and TS in 2022.

I think TypeScript *is* pretty popular, within the constraints it has.
Node.js is used by about 1.8% of websites, and TypeScript is seldom used outside of Node.js. That really means TypeScript has limited potential for use there.
You can use TypeScript on the client-side, but it can be a pain to set up, and unless you have quite a lot of client-side logic, it might not be worth it.
Personally, I think TypeScript on the client-side is well worth the effort, but not really worth it on the server side, where there are so many options outside of a JS runtime.
I don’t think anybody says JavaScript is a dead language. I think its long term future is pretty bleak though, for two reasons:
- TypeScript.
- WebAssembly.
The entire Internet doesn’t run on JavaScript, in fact hardly any of it does, what you mean is the *web*. The web and the Internet are two different things, and while JavaScript is of course ubiquitous in web sites, practically no Internet infrastructure is using JavaScript.
If you consider the Internet to be the road infrastructure and cars, the web is the screaming babies in the back seats.
Unless you can write really good TypeScript code, you’re probably better off sticking to JavaScript – if you have that option of course.
The main advantage of JS vs TS in an interview is that equivalent code will be much quicker to write with JS, as you don’t have to write type annotations and whatnot. The time that you have to spend mechanically writing code is not negligible, and time is of the essence.
Then again, the better you are at TypeScript, the less of a difference this will make. Also, in TypeScript there are more ways to write functionally equivalent code, so when you’re really great at TS you’re more likely to pick the very best way to express what you want to do, so your expertise and good coding style are more evident. Finally, with good TS you should be able to avoid writing some tests that may be necessary in JS, and your coding style is naturally more defensive, which is good.
Of those, TypeScript/Node.js/React is an easy answer. Though I’d also strongly recommend TypeScript on the frontend as well. If you skip Redux and instead use React Hooks you should find that TypeScript is a good fit.
But I wouldn’t use MySQL. PostgreSQL is stronger on almost every axis at this point, and given the lack of specificity of the purpose of the web site, I wouldn’t even necessarily recommend PostgreSQL over a half dozen other types of database.
Listen, if you want to design a web site such that it can grow, you need to make key technology choices strategically. If you’re using PostgreSQL, you can nearly seamlessly switch to CockroachDB, for instance, for much easier distributed database performance. Unless your database needs support for Geo-indexing, in which case you might need to split data between CockroachDB and MongoDB (edit: CockroachDB added Geo-index support!). Or if your website would benefit from a graph database, maybe OrientDB would be best.
Designing a website architecture is something that should be done by experienced experts. And the design goes deeper than just the technology choices. You need an architect who knows how to coordinate the architecture and the data flow your specific app will require. Otherwise you could paint yourself into a corner and end up with a site that’s failing at load with no easy path to fixing it, just at the point when your users are asking for more features.
A common cop-out inspired by the agile community is to claim that you just “ignore” the design and optimize later, but the truth is that many services that rely on that approach simply fail when they start to get traction.
Ironically, given your list of companies to be like, Facebook largely succeeded because a previous successful competitor, Friendster, couldn’t keep up with its expansion. The architecture had too many bottlenecks for them to scale horizontally, and they started hemorrhaging users by the thousands when the users found the site to be unresponsive too often. So if you want to be a Facebook, then plan for scaling from the start; otherwise the odds are good you’ll be a Friendster instead.
Not that Facebook necessarily planned it out in advance. I suspect they were instead just lucky. But “being lucky” isn’t a business plan.
A few good ones:
- Java + Spring Boot
- C# .NET
- Node + Express + Typescript
- Go
Can’t go wrong with any of those, really. I personally don’t care too much for the Node solution, but it’s plenty capable (if you can stomach that whole JS ecosystem thing)
What is a simple C++ program to find the average of 2 numbers?
This was actually one of the interview questions I got when I applied at Google.
“Write a function that returns the average of two numbers.”
So I did, the way you would expect: (x+y)/2. I did it as a C++ template so it works for any kind of number.
interviewer: “What’s wrong with it?”
Well, I suppose there could be an overflow if adding the two numbers requires more space than the numeric type can hold. So I rewrote it as (x/2) + (y/2).
interviewer: “What’s wrong with it now?”
Well, I think we are losing a little precision by pre-dividing. So I wrote it another way.
interviewer: “What’s wrong with it now?”
And that went on for about 10 minutes. It ended with us talking about the heat death of the universe.
I got the job and ended up working with the guy. He said he had never done that before. He had just wanted to see what would happen.
Comments:
1-
The big problem you get with x/2 + y/2 is that it can/will give incorrect answers for integer inputs. For example, let’s average 3 and 3. The result should obviously be 3.
But with integer division, 3/2 = 1, and 1+1 = 2.
You need to add one to the result if and only if both inputs are odd.
2- Here’s what I’d do in C++ for integers, which I believe does the right thing including getting the rounding direction correct, and it can likely be made into a template that will do the right thing as well. This is not complete code, but I believe it gets the details correct…
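The commenter’s code block isn’t preserved here; as a sketch of the idea (not their actual code), C++20’s std::midpoint from <numeric> computes an overflow-safe average with a well-defined rounding direction:

```cpp
#include <cstdint>
#include <iostream>
#include <numeric>   // std::midpoint, C++20

// Average two integers of the same type without intermediate overflow,
// rounding toward the first argument when the exact average is not representable.
template <typename T>
constexpr T average(T op1, T op2) {
    return std::midpoint(op1, op2);
}

int main() {
    std::cout << average(3, 3) << '\n';                                           // 3, not 2
    std::cout << average<std::uint32_t>(4'000'000'000u, 4'000'000'002u) << '\n';  // 4000000001
}
```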

That will work for any signed or unsigned integer type for op1 and op2 as long as they have the same type.
If you want it to do something intelligently where one of the operands is an unsigned type and the other one is a signed type, you could do it, but you need to define exactly what should happen, and realize that it’s quite likely that for maximum arithmetic correctness, the output type may need to be different than either input type. For instance, the average of a uint32_t and an int32_t can be too large to fit in an int32_t, and it can also be too small to fit in a uint32_t, so you probably need to go with a larger signed integer type, maybe int64_t.
3- I would have answered the question with a question, “Tell me more about the input, error handling capability of your system, and is this typical of the level of challenge here at google?” Then I’d provide eye contact, sit back, and see what happens. Years ago I had an interview question that asked what classical problem was part of a pen plotter control system. I told the interviewer that it was TSP but that if you had to change pens, you had to consider how much time it took to switch. They offered me a job but I declined given the poor financial condition of the company (SGI) which I discovered by asking the interviewer questions of my own. IMO: questions are at the heart of engineering. The interviewer, if they are smart, wants to see if you are capable of discovering the true nature of their problems. The best programmers I’ve ever worked with were able to get to the heart of problems and trade off solutions. Coding is a small part of the required skills.
What are some algorithms that computer hardware advances have made obsolete?
It depends on how you want to store and access data.
For the most part, as a general concept, old school cryptography is obsolete.
It was based on ciphers, which were based on it being mathematically “hard” to crack.
If you can throw a compute cluster at DES, even with a one byte “salt”, it’s pretty easy to crack a password database in seconds. Minutes, if your cluster is small.
Almost all computer security is based on big number theory. Today, that’s called:
What it means is that it’s hard to do math on very large numbers, and so if you have a large one, the larger the better.
Most cryptography today is based on elliptic curves.
But what we know from the proof of Fermat’s Last Theorem, and specifically from the Taniyama-Shimura conjecture, is that all elliptic curves have associated modular forms.
And so this gives us an avenue of attack on all modern cryptography, using graphical mathematics.
It’s an interesting field, and problem space.
Not one I’m interested in solving, since I’m sure it has already been solved by my “associates” who now work for the NSA.
I am only interested in new problems.
Comments:
1- Sorry, but this is just wrong. “Almost all cryptography,” counted by number of bytes encrypted and decrypted, uses AES. AES does not use “large numbers,” elliptic curves, or anything of that sort – it’s essentially combinatorial in nature, with a lot of bit-diddling – though there is some group theory at its base. The same can be said about cryptographic checksums such as the SHA series, including the latest “sponge” constructions.
Where RSA and elliptic curves and such come in is public key cryptography. This is important in setting up connections, but for multiple reasons (performance – but also for excellent cryptographic reasons) is not used for bulk encryption. There are related algorithms like Diffie-Hellman and some signature protocols like DSS. All of these “use large numbers” in some sense, but even that’s pushing it – elliptic curve cryptography involves doing math over … points on an elliptic curve, which does lead you to do some arithmetic, but the big advantage of elliptic curves is that the numbers are way, way smaller than for, say, RSA for equivalent security.
Much research these days is on “post-quantum cryptography” – cryptography that is secure against attacks by quantum computers (assuming we ever make those work). These tend not to be based on “arithmetic” in any straightforward sense – the ones that seem to be at the forefront these days are based on computation over lattices.
Cracking a password database that uses DES is so far away from what cryptography today is about that it’s not even related. Yes, the original Unix implementations – almost 50 years ago – used that approach. So?
C++ lambda functions are syntactic sugar for a longstanding set of practices in both C and C++: passing a function as an argument to another function, and possibly connecting a little bit of state to it.
This goes way back. Look at C’s qsort():
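For reference, the C standard library declares it (in <stdlib.h>; <cstdlib> in C++) as:

```cpp
void qsort(void *base, size_t nmemb, size_t size,
           int (*compar)(const void *, const void *));
```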
That last argument is a function pointer to a comparison function. You could use a captureless lambda for the same purpose in modern C++.
Sometimes, you want to tack a little bit of extra state alongside the function. In C, one way to do this is to provide an additional context pointer alongside the function pointer. The context pointer will get passed back to the function as an argument.
I give an extended example in here:
In C++, that context pointer can be this. When you do that, you have something called a function object. (Side note: function objects were sometimes called functors; however, functors aren’t really the same thing.)
If you overload the function call operator for a particular class, then objects of that class behave as function objects. That is, you can pretend like the object is a function by putting parentheses and an argument list after the name of an instance! When you arrive at the overloaded operator implementation, this will point at the instance.
Instances of this class will add an offset to an integer. The function call operator is operator() below.
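The original snippet isn’t preserved; a reconstruction (the class name is made up) might look like this:

```cpp
class AddOffset {
    int offset_;
public:
    explicit AddOffset(int offset) : offset_(offset) {}
    int operator()(int x) const { return x + offset_; }   // the function call operator
};
```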
and to use it:
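Again a reconstruction rather than the original snippet:

```cpp
#include <iostream>

int main() {
    AddOffset add42(42);                  // a function object carrying its own state
    for (int i = 0; i <= 9; ++i)
        std::cout << add42(i) << '\n';    // "calls" the object like a function
}
```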
That’ll print out the numbers 42, 43, 44, … 51 on separate lines.
And tying this back to the qsort() example from earlier: C++’s std::sort can take a function object for its comparison operator.
Modern C++’s lambda functions are syntactic sugar for function objects. They declare a class with an unutterable name, and then give you an instance of that class. Under the hood, the class’ constructor implements the capture, and initializes any state variables.
Other languages have similar constructs. I believe this one originated in LISP. It goes waaaay back.
As for any challenges associated with them: lifetime management. You potentially introduce a non-nested lifetime for any state associated with the callback, function object, or lambda.
If it’s all self contained (i.e. it keeps its own copies of everything), you’re less likely to have a problem. It owns all the state it relies on.
If it has non-owning pointers or references to other objects, you need to ensure the lifetime of your callback/function object/lambda remains within the lifetime of that other non-owned object. If that non-owned object’s lifetime isn’t naturally a superset of the callback/function object/lambda, you should consider taking a copy of that object, or reconsider your design.
Each one has specific strengths in terms of syntax features.
But the way to look at this is that all three are general purpose programming languages. You can write pretty much anything in them.
Trying to rank these languages in some kind of absolute hierarchy makes no sense and only leads to tribal ‘fanboi’ arguments.
If you need part of your code to talk to hardware, or could benefit from taking control of memory management, C++ is my choice.
General web service stuff, Java has an edge due to familiarity.
Anything involving a pre-existing Microsoft component – e.g. data in SQL Server, or Azure – I will go all in on C#.
I see more similarity than difference overall.
Visual Studio Code is OK if you can’t find anything better for the language you’re using. There are better alternatives for most popular languages.
C# – Use Visual Studio Community, it’s free, and far better than Visual Studio Code.
Java – Use IntelliJ
Go – Goland.
Python – PyCharm.
C or C++ – CLion.
If you’re using a more unusual language, maybe Rust, Visual Studio Code might be a good choice.
Comments:
#1: Just chipping in here. I used to be a massive Visual Studio fanboy and loved my fancy GUI for doing things without knowing what was actually happening. I’ve been using VS Code and Linux for a few years now and am really enjoying the bare-metal exposure you get working with them; typing commands is way faster for getting things done than mouse-clicking through a bunch of GUIs. Both are good though.
#2: C# is unusual in that it’s the only language which doesn’t follow the maxim, “if JetBrains have blessed your language with attention, use their IDE”.
Visual Studio really is first class.
#3: for Rust as long as you have rust-analyzer and clippy, you’re good to go. Vim with lua and VS Code both work perfectly.
#4: This is definitely skirting the realm of opinion. It’s a great piece of software. There is better and worse stuff but it all depends upon the person using it, their skill, and style of development.
#5: VSCode is excellent for coding. I’ve been using it for about 6 years now, mainly for Python work, but also developing JS based mobile apps. I mainly use Visual Studio, but VSC’s slightly stripped back nature has been embellished with plenty of updates and more GUI discovery methods, plus that huge extensions library (I’ve worked with the creation of an intellisense style plugin as well).
I’m personally a fan of keeping it simple on IDEs, and I work in a lot of languages. I’m not installing 6 or 7 IDEs because they apparently have advantages in that specific language, so I’d rather install one IDE which can do a credible job on all of them.
I’m more a fan of developing software than getting anally retentive about knowing all the keyboard shortcuts to format a source file. Life’s too short for that. Way too short!
To each their own. Enjoy whatever you use!
Dmitry Aliev is correct that this was introduced into the language before references.
I’ll take this question as an excuse to add a bit more color to this.
C++ evolved from C via an early dialect called “C with Classes”, which was initially implemented with Cpre, a fancy “preprocessor” targeting C that didn’t fully parse the “C with Classes” language. What it did was add an implicit this pointer parameter to member functions. E.g.:
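The original snippet isn’t preserved; presumably a member function along these lines was shown:

```cpp
struct S {
    int f();
};
```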

was translated to something like:
- int f__1S(S *this);
(the funny name f__1S is just an example of a possible “mangling” of the name of S::f, which allows traditional linkers to deal with the richer naming environment of C++).
What might come as a surprise to the modern C++ programmer is that in that model this is an ordinary parameter variable and therefore it can be assigned to! Indeed, in the early implementations that was possible:
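A sketch of what that historical dialect allowed; this is not valid standard C++ and modern compilers reject it:

```cpp
struct S {
    void f() {
        this = 0;   // "this" behaved like an ordinary, assignable parameter back then
    }
};
```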

Interestingly, an idiom arose around this ability: Constructors could manage class-specific memory allocation by “assigning to this” before doing anything else in the constructor. E.g.:
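A sketch of the historical idiom (the allocator names are made up); it does not compile as standard C++, where this is no longer assignable:

```cpp
void *my_alloc(unsigned long size);
void  my_free(void *p);

class X {
public:
    X() {
        this = (X *)my_alloc(sizeof(X));   // take over allocation before anything else
        // ... normal construction work ...
    }
    ~X() {
        my_free(this);
        this = 0;                          // signal "storage already released"
    }
};
```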

That technique (brittle as it was, particularly when dealing with derived classes) became so widespread that when C with Classes was re-implemented with a “real” compiler (Cfront), assignment to this remained valid in constructors and destructors even though this had otherwise evolved into an immutable expression. The C++ front end I maintain still has modes that accept that anachronism. See also section 17 of the old Cfront manual found here, for some fun reminiscing.
When standardization of C++ began, the core language work was handled by three working groups: Core I dealt with declarative stuff, Core II dealt with expression stuff, and Core III dealt with “new stuff” (templates and exception handling, mostly). In this context, Core II had to (among many other tasks) formalize the rules for overload resolution and the binding of this. Over time, they realized that that name binding should in fact be mostly like reference binding. Hence, in standard C++ the binding of something like:
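The original snippet isn’t preserved; the idea is that in a member call the object expression binds to the hidden parameter much as a reference would:

```cpp
struct S { int f() { return 42; } };

int main() {
    S s;
    s.f();   // s binds to the hidden parameter, roughly as if initializing  S &__this = s;
}
```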

In other words, the expression this is now effectively a kind of alias for &__this, where __this is just a name I made up for an unnamable implicit reference parameter.
C++11 further tweaked this by introducing syntax to control the kind of reference that this is bound from. E.g.:
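The original example isn’t preserved; the syntax in question is ref-qualified member functions:

```cpp
struct S {
    void f() &;    // "this" bound from an lvalue
    void f() &&;   // "this" bound from an rvalue
};
```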
That model was relatively well-understood by the mid-to-late 1990s… but then unfortunately we forgot about it when we introduced lambda expressions. Indeed, in C++11 we allowed lambda expressions to “capture” this:
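A sketch of the kind of capture being described (the names are made up):

```cpp
struct Widget {
    int value = 42;
    auto getter() {
        return [this] { return value; };   // captures the pointer, not a copy of the object
    }
};
```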
After that language feature was released, we started getting many reports of buggy programs that “captured” this thinking they captured the class value, when instead they really wanted to capture __this (or *this). So we scrambled to try to rectify that in C++17, but because lambdas had gotten tremendously popular we had to make a compromise. Specifically:
- we introduced the ability to capture *this
- we allowed [=, this] since now [this] is really a “by reference” capture of *this
- even though [this] was now a “by reference” capture, we left in the ability to write [&, this], despite it being redundant (compatibility with earlier standards)
Our tale is not done, however. Once you write much generic C++ code you’ll probably find out that it’s really frustrating that the __this parameter cannot be made generic because it’s implicitly declared. So we (the C++ standardization committee) decided to allow that parameter to be made explicit in C++23. For example, you can write (example from the linked paper):
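The paper’s exact example isn’t reproduced here; the following sketch is in the same spirit:

```cpp
struct less_than {
    template <typename T, typename U>
    bool operator()(this less_than, T const &lhs, U const &rhs) {  // explicit object parameter, by value
        return lhs < rhs;
    }
};
// usage: less_than{}(3, 5) is true
```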
In that example, the “object parameter” (i.e., the previously hidden reference parameter __this) is now an explicit parameter and it is no longer a reference!
Here is another example (also from the paper):
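Again a sketch in the same spirit rather than the paper’s verbatim example (the member names are made up):

```cpp
#include <cstdio>

struct Base {
    template <typename Self>
    void greet(this Self &&self) {    // the object parameter has a deduced, template-dependent type
        self.name();                   // and the deduction can find a *derived* type
    }
};

struct Derived : Base {
    void name() const { std::puts("hello from Derived"); }
};

int main() {
    Derived d;
    d.greet();   // Self is deduced as Derived&, so Derived::name is called without virtual dispatch
}
```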
Here:
- the type of the object parameter is a deducible template-dependent type
- the deduction actually allows a derived type to be found
This feature is tremendously powerful, and may well be the most significant addition by C++23 to the core language. If you’re reasonably well-versed in modern C++, I highly recommend reading that paper (P0847) — it’s fairly accessible.
It adds some extra steps in design, testing and deployment for sure. But it can buy you an easier path to scalability and an easier path to fault tolerance and live system upgrades.
It’s not REST itself that enables that. But if you use REST you will have split your code up into independently deployable chunks called services.
So more development work to do, yes, but you get something a single monolith can’t provide. If you need that, then the REST service approach is a quick way to doing it.
We must compare like for like in terms of results for questions like this.
Because at the time, there was likely no need.
Based on what I could find, the strtok library function appeared in System III UNIX some time in 1980.
In 1980, memory was small, and programs were single threaded. I don’t know whether UNIX had any support for multiple processors, even. I think that happened a few years later.
Its implementation was quite simple.
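The original listing isn’t shown here; as a rough sketch of the idea (not the historical source), a strtok-like function keeps its state in a single static pointer, which is exactly why it is simple but not thread-safe:

```cpp
#include <cstdio>
#include <cstring>

char *my_strtok(char *s, const char *delim) {
    static char *save;                  // the hidden internal state
    if (s == nullptr) s = save;         // continue from where the last call stopped
    s += std::strspn(s, delim);         // skip leading delimiters
    if (*s == '\0') return nullptr;
    char *token = s;
    s += std::strcspn(s, delim);        // find the end of the token
    if (*s != '\0') *s++ = '\0';        // terminate the token in place
    save = s;
    return token;
}

int main() {
    char line[] = "one two three";
    for (char *t = my_strtok(line, " "); t != nullptr; t = my_strtok(nullptr, " "))
        std::puts(t);
}
```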
This was 3 years before they started the standardization process, and 9 years before it was standardized in ANSI C.
This was simple and good enough, and that’s what mattered most. It’s far from the only library function with internal state.
And Lex/YACC took over more complex scanning and parsing tasks, so it probably didn’t get a lot of attention for the lightweight uses it was put to.
For a tongue-in-cheek take on how UNIX and C were developed, read this classic:
Because the ‘under the hood’ code is about 50 years old. I’m not kidding. I worked on some video poker machines that were made in the early 1970’s.
Here’s how they work.
You have an array of ‘cards’ from 0 to 51. Pick one at random. Slap it in position 1 and take it out of your array. Do the same for the next card … see how this works?
Video poker machines are really that simple. They literally simulate a deck of cards.
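A minimal sketch of that dealing loop, purely for illustration:

```cpp
#include <iostream>
#include <numeric>
#include <random>
#include <vector>

int main() {
    std::vector<int> deck(52);
    std::iota(deck.begin(), deck.end(), 0);   // "cards" 0..51

    std::mt19937 rng{std::random_device{}()};
    std::vector<int> hand;
    for (int i = 0; i < 5; ++i) {
        std::uniform_int_distribution<std::size_t> pick(0, deck.size() - 1);
        std::size_t k = pick(rng);            // pick one at random
        hand.push_back(deck[k]);              // deal it into the hand
        deck.erase(deck.begin() + k);         // take it out of the array
    }
    for (int card : hand) std::cout << card << ' ';
    std::cout << '\n';
}
```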
Anything else, at least in Nevada, is illegal. Let me rephrase that, it is ILLEGAL, in all caps.
If you were to try to make a video poker game (or video keno, or slot machine) in any other way than as close to truly random selection from an ‘array’ of options as you can get, Nevada Gaming will come after you so hard and fast, your third cousin twice removed will have their ears ring for a week.
That is if the Families don’t get you first, and they’re far less kind.
All the ‘magic’ is in the payout tables, which on video poker and keno are literally posted on every machine. If you can read them, you can figure out exactly what the payout odds are for any machine.
There’s also a little note at the bottom stating that the video poker machine you’re looking at uses a 52 card deck.
Comments:
1- I have a slot machine, and the code on the odds chip looks much like an Excel spreadsheet: every combination is displayed in this spreadsheet, so the exact odds can be listed, along with the payout tables. The machine picks a random number, let’s say 452 out of 1000. The computer looks at the spreadsheet and says that this is the combination bar-bar-7 and you get 2 credits for it. The wheels will spin to match the indication in the spreadsheet. If I go into the game diagnostics I can see whether it is a win or not; you do not win on what the wheels display, but on the actual number from the spreadsheet. The game knows if you won or lost before the wheels stop.
2- I had a conversation with a guy who had retired from working in casino security. He was also responsible for some setup and maintenance on slot machines, video poker and others. I asked about the infamous video poker machine that a programmer at the manufacturer had put in a backdoor so he and a few pals could get money. That was just before he’d started but he knew how it was done. IIRC there was a 25 step process of combinations of coin drops and button presses to make the machine hit a royal flush to pay the jackpot.
Slot machines that have mechanical reels actually run very large virtual reels. The physical reels have position encoders so the electronics and software can select which symbol to stop on. This makes for far more possible combinations than relying on the space available on the physical reels.
Those islands of machines with the sign that says 95% payout? Well, you guess which machine in the group is set to that payout % while the rest are much closer to the minimum allowed.
Machines with a video screen that gives you a choice of things to select by touch or button press? It doesn’t matter what you select, the outcome is pre-determined. For example, if there’s a grid of spots and the first three matches you get determines how many free spins you get, if the code stopped on giving you 7 free spins, out of a possible maximum of 25, you’re getting 7 free spins no matter which spots you touch. It will tease you with a couple of 25s, a 10 or 15 or two, but ultimately you’ll get three 7s, and often the 3rd 25 will be close to the other two or right next to the last 7 “you” selected to make you feel like you just missed it when the full grid is briefly revealed.
There was a Discovery Channel show where the host used various power tools to literally hack things apart to show their insides and how they worked. In one episode he sawed open a couple of slot machines, one from the 1960’s and a purely mechanical one from the 1930’s or possibly 1940’s. In that old machine he discovered the casino it had been in decades prior had installed a cheat. There was a metal wedge bolted into the notch for the 7 on one reel so it could never hit the 777 jackpot. I wondered if the Nevada Gaming Commission could trace the serial number and if they could levy a fine if the company that had owned and operated it was still in business.
3- Slightly off-topic. I worked for a company that sold computer hardware, one of our customers was the company that makes gambling machines. They said that they spent close to $0 on software and all their budget on licensing characters
This question is like asking why you would ever use int when you have the Integer class. Java programmers seem especially zealous about everything needing to be wrapped, and wrapped, and wrapped.
Yes, ArrayList<Integer> does everything that int[] does and more… but sometimes all you need to do is swat a fly, and you just need a flyswatter, not a machine-gun.
Did you know that in order to convert int[] to ArrayList<Integer>, the system has to go through the array elements one at a time and box them, which means creating a garbage-collected object on the heap (i.e. Integer) for each individual int in the array? That’s right; if you just use int[], then only one memory allocation is needed, as opposed to one for each item.
I understand that most Java programmers don’t know about that, and the ones who do probably don’t care. They will say that this isn’t going to be the reason your program is running slowly. They will say that if you need to care about those kinds of optimizations, then you should be writing code in C++ rather than Java. Yadda yadda yadda, I’ve heard it all before. Personally though, I think that you should know, and should care, because it just seems wasteful to me. Why dynamically allocate n individual objects when you could just have a contiguous block in memory? I don’t like waste.
I also happen to know that if you have a blasé attitude about performance in general, then you’re apt to be the sort of programmer who unknowingly, unnecessarily writes four nested loops and then has no idea why their program took ten minutes to run even though the list was only 100 elements long. At that point, not even C++ will save you from your inefficiently written code. There’s a slippery slope here.
I believe that a software developer is a sort of craftsman. They should understand their craft, not only at the language level, but also how it works internally. They should convert int[] to ArrayList<Integer> only because they know the cost is insignificant, and they have a particular reason for doing so other than “I never use arrays, ArrayList is better LOL”.
Very similar, yes.
Both languages feature:
- Static typing
- Nominative interface typing
- Garbage collection
- Class-based design
- Single-dispatch polymorphism
so whilst syntax differs, the key things that separate OO support across languages are the same.
There are differences but you can write the same design of OO program in either language and it won’t look out of place
Last time I needed to write an Android app, even though I already knew Java, I still went with Kotlin 😀
I’d rather work in a language I don’t know than… Java… and yes, I know a decent Java IDE can auto-generate this code – but this only solves the problem of writing the code, it doesn’t solve the problem of having to read it, which happens a lot more than writing it.
I mean, which of the below conveys the programmer’s intent more clearly, and which one would you rather read when you forget what a part of the program does and need a refresher:
Even if both of them required no effort to write… the Java version is pure brain poison…
If you have two books on the same subject, but one is skinny and the other is fat, go with the skinny one. For example:
The book on the left has 796 pages; the book on the right a mere 176. Yet the book on the right told us everything we needed to know to write our own, efficient, native-code-generating Plain English compiler in Plain English:
Compare also the Inside Macintosh documentation before and after the Pascal programmers were replaced with C programmers:
Note that the whole set (green arrow) documenting the slim and trim Pascal system was the same size as a single volume (red arrow) of the bloated C version.
Et voila!
Why is volatile not considered useful in multithreaded C or C++ programming?
Because it’s insufficient to deal with the memory semantics of current computers. In fact, it was obsolete almost as soon as it first became available.
Volatile tells a compiler that it may not assume the value of a memory location has not changed between reads or writes. This is sometimes sufficient to deal with memory-mapped hardware registers, which is what it was originally for.
But that doesn’t deal with the semantics of a multiprocessor machine’s cache, where a memory location might be written and read from several different places, and we need to be sure we know when written values will be observable relative to control flow in the writing thread.
Instead, we need to deal with acquire/release semantics of values, and the compilers have to output the right machine instructions so that we get those semantics from the real machines. So, the atomic memory intrinsics come to the rescue. This is also why inline assembler acts as an optimization barrier; before there were intrinsics for this, it was done with inline assembler. But intrinsics are better, because the compiler can still do some optimization with them.
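As a minimal illustration of acquire/release semantics with std::atomic (a sketch, not taken from the original answer):

```cpp
#include <atomic>
#include <thread>

int data = 0;
std::atomic<bool> ready{false};

void producer() {
    data = 42;                                      // plain write
    ready.store(true, std::memory_order_release);   // "publishes" the write above
}

void consumer() {
    while (!ready.load(std::memory_order_acquire)) {}   // wait until the flag is observed
    // here, data is guaranteed to be 42 -- something volatile alone cannot promise
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
```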
C++ is a programming language specified through a standard that is “abstract” in various ways. For example, that standard doesn’t currently formally recognize a notion of “runtime” (I would actually like to change that a little bit in the future, but we’ll see).
Now, in order to allow implementations to make assumptions it removes certain situations from the responsibility of the implementation. For example, it doesn’t require (in general) that the implementation ensure that accesses to objects are within the bounds of those objects. By dropping that requirement, the code for valid accesses can be more efficient than would be required if out-of-bounds situations were the responsibility of the implementation (as is the case in most other modern programming languages). Those “situations” are what we call “undefined behaviour”: The implementation has no specific responsibilities and so the standard allows “anything” to happen. This is in part why C++ is still very successful in applications that call for the efficient use of hardware resources.
Note, however, that the standard doesn’t disallow an implementation from doing something that is implementation-specified in those “undefined behaviour” situations. It’s perfectly all right (and feasible) for a C++ implementation to be “memory safe” for example (e.g., not attempt access outside of object bounds). Such implementations have existed in the past (and might still exist, but I’m not currently aware of one that completely “contains” undefined behaviour).
ADDENDUM (July 16th, 2021):
The following article about undefined behavior crossed my metaphorical desk today:
How does a database handle pagination?

It doesn’t. First, a database is a collection of related data, so I assume you mean DBMS or database language.
Second, pagination is generally a function of the front-end and/or middleware, not the database layer.
But some database languages provide helpful facilities that aid in implementing pagination. For example, many SQL dialects provide LIMIT and OFFSET clauses that can be used to emit up to n rows starting at a given row number, i.e., a “page” of rows. If the query results are sorted via ORDER BY and are generally unchanged between successive invocations, then that can be used to implement pagination.
That may not be the most efficient or effective implementation, though.

So how do you propose pagination should be done?
In the context of web apps, let’s say there are 100 million users. One cannot dump all the users in a single response.
Cache database query results in the middleware layer using Redis or similar and serve out pages of rows from that.
What if you have 30,000-plus rows? Do you fetch all of that from the database and cache it in Redis?
I feel the most efficient solution is still offset and limit. It doesn’t make sense to use a database and then end up putting all of your data in Redis especially data that changes a lot. Redis is not for storing all of your data.
If you have large data set, you should use offset and limit, getting only what is needed from the database into main memory (and maybe caching those in Redis) at any point in time is very efficient.
With 30,000 rows in a table, if offset/limit is the only viable or appropriate restriction, then that’s sometimes the way to go.
More often, there’s a much better way of restricting 30,000 rows via some search criteria that significantly reduces the displayed volume of rows — ideally to a single page or a few pages (which are appropriate to cache in Redis.)
It’s unlikely (though it does happen) that users really want to casually browse 30,000 rows, page by page. More often, they want this one record, or this small set of records.
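As a sketch of that kind of restriction (again with invented names), the search criteria do most of the narrowing and the page clause only trims what is displayed:
    SELECT id, name, created_at
    FROM orders
    WHERE customer_id = 42
      AND created_at >= DATE '2022-01-01'
    ORDER BY created_at DESC
    LIMIT 25;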
Question: This is a general question that applies to MySQL, Oracle DB or whatever else might be out there.
I know for MySQL there is LIMIT offset,size; and for Oracle there is ‘ROW_NUMBER’ or something like that.
But when such ‘paginated’ queries are called back to back, does the database engine actually do the entire ‘select’ all over again and then retrieve a different subset of results each time? Or does it do the overall fetching of results only once, keeps the results in memory or something, and then serves subsets of results from it for subsequent queries based on offset and size?
If it does the full fetch every time, then it seems quite inefficient.
If it does the full fetch only once, it must be ‘storing’ the query results somewhere, so that the next time that query comes in, it knows it has already fetched all the data and just needs to extract the next page from it. In that case, how will the database engine handle multiple threads? Two threads executing the same query?
Answer: First of all, do not assume in advance whether something will be fast or slow without taking measurements, and do not complicate the code up front by downloading 12 pages at once and caching them because “it seems to me that it will be faster”.
YAGNI principle – the programmer should not add functionality until deemed necessary.
Do it in the simplest way (ordinary pagination, one page at a time), measure how it works in production; if it is slow, try a different method; if the speed is satisfactory, leave it as it is.
From my own practice: an application retrieves data from a table containing about 80,000 records; the main table is joined with 4-5 additional lookup tables; the whole query is paginated, about 25-30 records per page, about 2,500-3,000 pages in total. The database is Oracle 12c, there are indexes on a few columns, and the queries are generated by Hibernate. Measurements on the production system at the server side show that the average (median, 50th percentile) time to retrieve one page is about 300 ms. The 95th percentile is less than 800 ms, meaning 95% of requests for a single page take less than 800 ms; when we add the transfer time from the server to the user and a rendering time of about 0.5-1 seconds, the total time is under 2 seconds. That's enough; users are happy.
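For reference, a page query against Oracle 12c like the one described above would presumably use the ANSI row-limiting clause, roughly along these lines (table and column names are invented, not taken from that system):
    -- page 3 at 25 rows per page
    SELECT o.id, o.status, c.name
    FROM orders o
    JOIN customers c ON c.id = o.customer_id
    ORDER BY o.id
    OFFSET 50 ROWS FETCH NEXT 25 ROWS ONLY;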
And some theory: see this answer to learn what the purpose of the Pagination pattern is.
- How to use data.table in R by Arindam Basu (Database on Medium) on June 28, 2022 at 11:18 pm
The package data.table is a great package for handling data frames in R.
- Latest transfers — 28/06/2022 by Tudo pelo Futebol (Database on Medium) on June 28, 2022 at 10:48 pm
- Latest transfers — 28/06/2022 by Todo por el Fútbol (Database on Medium) on June 28, 2022 at 10:48 pm
- Latest Transfers — 06/28/2022 by Everything for Football (Database on Medium) on June 28, 2022 at 10:48 pm
- ActiveRecord Bulk Change change_table :bulk => true by Muhammad Umair (Database on Medium) on June 28, 2022 at 8:16 pm
Every database operation has a cost. Normally, when we have to make multiple changes to a table, let's say add some new columns and an index, we…
- Latest updated players — 28/06/2022 by Tudo pelo Futebol (Database on Medium) on June 28, 2022 at 8:14 pm
- Latest updated players — 28/06/2022 by Todo por el Fútbol (Database on Medium) on June 28, 2022 at 8:14 pm
- Last Updated Players — 06/28/2022 by Everything for Football (Database on Medium) on June 28, 2022 at 8:14 pm
- PL/SQL — Comprehensive Introduction by Yamika Perera (Database on Medium) on June 28, 2022 at 8:14 pm
What is PL/SQL?
- Room Database in Kotlin — Beginner In-Depth Guide (1) by Reyhaneh Ezatpanah (Database on Medium) on June 28, 2022 at 6:28 pm
I will explain why and how we can use the Room database in Kotlin in this series.
- Database for real-time data with filter/sort and query functionality by /u/shini-chan (Database) on June 28, 2022 at 1:33 pm
I have price data that changes rapidly in real time. I'm using Firebase Realtime Database at the moment, but I've got stuck trying to filter/sort the data, as multiple "AND" operators for queries are not simple to implement. There are workarounds, but I think it's worth considering a more suitable database instead (considering it's quite early in the project). I have a script file which reads stream data from a bunch of APIs and then, after formatting, writes/updates the data in the database frequently. My Next.js website uses Firebase to read that information and present it to customers. Additionally, it generates a linked list of specific data points which is often re-computed as the data changes; in the worst case this linked list is generated by making a comparison with every data point in the database. (I have yet to implement a way to pass that linked list on to the database; I'm not sure if I should use cloud functions or just send the data the same way.) My priorities are speed (getting the information to my website as fast as possible) and cost. I may initially make database updates at a rate of 1/min, but would like it to withstand milestones of 20/min, 100/min, and 1000+/min as I start to monitor separate collections (which would be independent of one another).
- Cloud native deployment with Helm Chart, SQL dialect translation and new API driver boost ShardingSphere's data gateway capability by /u/y2so (Database) on June 28, 2022 at 8:57 am
- I have a table of funds. Each fund has a benchmark. Some benchmarks are primary. Others are peer benchmarks. I have two fields in my fund table representing these two things. My benchmarks table has all of the primary and peer benchmarks together. What’s the correct way to build this schema? by /u/bitbyt3bit (Database) on June 27, 2022 at 11:20 pm
- Database suggestions for storing structural engineering data by /u/Birdynam98 (Database) on June 27, 2022 at 7:50 am
Is there a database that can help store the data of a structural engineering system? The tool should support the following: NoSQL, Git functionality (merge/push/pull/commit/branching, etc.) and Python scripting. The system may be a mooring system or, let's say, a house, where the data is structured as system => beams => support beams => cross-section/material/cross-section data (as an example). Do you know a database that could suit this purpose? EDIT: The chosen tool does not actually need to be NoSQL; it is far more important that the tool we choose can do version control and Git-like functionality. If such a tool exists so that this technology gap is bridged using SQL, that is perfectly fine.
- Database Advice by /u/unbutter_robot (Database) on June 26, 2022 at 10:26 pm
I need advice on database design for a friend's lab. Experiments usually consist of 1,000 animals, with experiments performed monthly over 3 years. Currently all data is entered into Excel or CSV files (~20 total). What is the best database design that can import data from all the spreadsheets and provide a user-friendly, filterable front end for easy queries? (Researchers are not expected to know SQL.) Example query: pull 400 animals that are male, blood type X, and fur color Y, with 50 different variables each. Would MongoDB, Flask, and React be a good stack? This would run on a local machine. Or something simpler like MySQL with a JS front end?
- Can a 40GB MySQL table run queries without using large server resources? by /u/DropBears12 (Database) on June 26, 2022 at 10:11 pm
I wanted to get a hypothetical answer. Let's say you have three tables: CUSTOMER (20 GB), TRANSACTION (40 GB) and ITEM (10 GB). You have a query such as:
SELECT C.CUS_ID, C.CUS_NAME, I.ITEM_NAME, T.AMOUNT, T.TRANSACTION_DATE
FROM CUSTOMER C
LEFT JOIN TRANSACTION T ON C.CUS_ID = T.CUS_ID
LEFT JOIN ITEM I ON T.ITEM_ID = I.ITEM_ID
WHERE C.CUS_NAME = 'Bob' AND T.TRANSACTION_DATE > '2012-01-01'
ORDER BY I.ITEM_ID ASC, T.TRANSACTION_DATE DESC
LIMIT 15
You have also indexed everything needed to make this query run fast (at most 0.001 seconds run time). What are the server resource consequences of running this query while interchanging C.CUS_NAME = 'Bob' with another name? The indexing, for example, takes up more storage, but does it also affect the amount of RAM or CPU used? In other words, while this query is running (or not running), could it affect other processes on the server?
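For what it's worth, "indexed everything" for a query shaped like that would presumably mean composite indexes along these lines (the index names and column orders are guesses, not taken from the post):
    CREATE INDEX idx_customer_name ON CUSTOMER (CUS_NAME, CUS_ID);
    CREATE INDEX idx_transaction_cus_date ON TRANSACTION (CUS_ID, TRANSACTION_DATE, ITEM_ID, AMOUNT);
    CREATE INDEX idx_item_id_name ON ITEM (ITEM_ID, ITEM_NAME);
The trade-off is extra storage and write cost for the indexes; reads then only touch the index ranges that match the given name and date.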
- SchemaDB or any alternative? by /u/AccomplishedLet5782 (Database) on June 26, 2022 at 8:33 pm
For educational purposes, I'm looking for a GUI SQL tool that supports ER-model schemas. SchemaDB looks really good, but I'm researching the possibilities. Are there any alternatives that make more sense for a professional career? I will use it concurrently with HeidiSQL.
- Inheritance in PostgreSQL (not sure about the title) by /u/DowntownLength2973 (Database) on June 26, 2022 at 2:45 pm
Hey, in my application I have two types of users, "Student" and "Teacher", and both can post a publication, so the "publication" table must have a foreign key to either "Student" or "Teacher". How can I implement that?
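One common way to model this kind of shared ownership (a sketch only; the table and column names are invented for illustration, not the poster's schema) is to give students and teachers a common parent table and point the publication at that:
    CREATE TABLE app_user (
      id   bigserial PRIMARY KEY,
      role text NOT NULL CHECK (role IN ('student', 'teacher'))
    );
    CREATE TABLE student (
      user_id bigint PRIMARY KEY REFERENCES app_user (id)
      -- student-specific columns go here
    );
    CREATE TABLE teacher (
      user_id bigint PRIMARY KEY REFERENCES app_user (id)
      -- teacher-specific columns go here
    );
    CREATE TABLE publication (
      id        bigserial PRIMARY KEY,
      author_id bigint NOT NULL REFERENCES app_user (id),
      body      text
    );
The alternative of putting two nullable foreign keys on publication also works, but then a check constraint is needed to ensure exactly one of them is set.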
- Need a database that can hold 16 million records and export any 2000 non-sequential records to Excel within 10 seconds. by /u/privacythrowaway820 (Database) on June 26, 2022 at 12:51 am
I'll be doing this over and over again, so it doesn't just need to happen once. What is the best database manager to handle this? Is Power Query the best way to pull the records into Excel? Edit: Let me explain a bit more about what I am trying to do. Basically, I'm using my own formulas in Excel to generate the 2,000 primary keys whose records I am looking up. I then want to return those records to Excel for calculation purposes. Would Power Query, properly linked to an SQL database, accomplish this?
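As a sketch of the lookup itself (assuming an indexed integer primary key in something like MySQL; the table name and key values are invented), the generated keys end up in a query of this shape, which Power Query or any ODBC connection can send and read back into Excel:
    SELECT *
    FROM records
    WHERE id IN (1045, 20987, 333112, 4500001);  -- in practice, the full list of 2,000 generated keys
With an index on the key column, fetching 2,000 rows out of 16 million this way is typically well within a 10-second budget.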
- Recommendations for C++ API database by /u/thracian_warrior (Database) on June 25, 2022 at 12:46 pm
I am a C++ developer who is new to databases. I want to store versioned copies of many CSV files in a database, which ideally should be file-backed to allow for crash recovery. I want to query the difference across versions when I push a new version of a file to the DB. There could be as many as 1,000 CSV files, each roughly ~20 MB in size. Any suggestions on which GitHub repos or technologies I should explore? The preference is for a database that provides C++ APIs, so that I can plug it into my existing application. If C++ is a strict no for handling databases, then what language would you suggest?
- Is there an OLTP database engine that versions all sequential states of the database (similar to git) and provides efficient sub-second operations for looking up records at any of those states? by /u/_beos_ (Database) on June 25, 2022 at 6:57 am
If you look at git as a database, and look at commits as transactional units of work involving multiple INSERT/UPDATE/DELETE operations, then git is a database in which you can query its complete state at any given point in time. For example, you can say: SHOW ME the 10th line of file src/example.js when commit_number = 1000. We can order commits by date, find the 1000th commit, and see what the 10th line (row) of the src/example.js file (table) was. So we can argue that git as a database has global, entire-database-level versioning. In the RDBMS world, at least in the databases that I know, this level of versioning is at snapshot granularity. For example, you can't run queries like this: SELECT * from users where id = 1 and $global_database_commit_number = 1000, meaning: show me the user that had id 1 when the 1000th database transaction was committed. Do you know of any such databases that are as scalable as Postgres, MySQL, etc.? Maybe a blockchain is such a database, but transactions there are expensive and we don't have tables or table-like structures on them anyway.
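The closest mainstream analogue I know of is SQL:2011 system versioning (MariaDB ships it, for example); it versions rows by transaction time rather than by a global commit number, so this is only an approximation of what is asked for, and the table here is invented:
    CREATE TABLE users (
      id   bigint PRIMARY KEY,
      name varchar(100)
    ) WITH SYSTEM VERSIONING;

    -- later: read the row as it looked at a given point in time
    SELECT *
    FROM users
    FOR SYSTEM_TIME AS OF TIMESTAMP '2022-06-01 00:00:00'
    WHERE id = 1;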
- Hello guys, I have a query in ADW written in Azure SQL syntax: select *, percentile_cont(0.3) within group (order by gmv*1.0/10000) over (partition by article_type, gender, mrp_bucket) from table. Now I need its equivalent in Presto SQL; I didn't find any function similar to percentile_cont. by /u/Ok-Career-8761 (Database) on June 24, 2022 at 7:49 pm
The function percentile_cont(0.3) adds an extra column with the 30th percentile of gmv*1.0/10000, so the extra column contains the same value for every row of a partition, i.e. for a particular group; here the group is (article_type, gender, mrp_bucket). So I need the equivalent of this in Presto SQL.
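I don't believe Presto has percentile_cont, but it does have approx_percentile, and Presto generally allows aggregate functions to be used with an OVER clause; a hedged sketch (the result is approximate rather than exact, and my_table stands in for the real table name):
    SELECT *,
           approx_percentile(gmv * 1.0 / 10000, 0.3)
             OVER (PARTITION BY article_type, gender, mrp_bucket) AS gmv_p30
    FROM my_table;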
- I posted here a couple of days ago asking if it is possible to import a CSV file with 300 million records into a MySQL database by /u/Bluesky4meandu (Database) on June 24, 2022 at 5:53 pm
The reason I was asking is that I work with huge files, and MySQL choked when I tried to even open a 1 GB file a couple of weeks ago. I have just discovered a text editor called EmEditor that can open files up to 16 TB in size. I have found what I am looking for; I think it is made by a Japanese company, but the software is localized in English. I downloaded the free version and I am playing around with it.
- Cloud Storage (for MATLAB) to store .stl files? by /u/Puzzleheaded-Beat-42 (Database) on June 24, 2022 at 3:04 am
Where can I find a cloud storage service that is compatible with MATLAB for hundreds of .stl files? And most importantly, how can I store those .stl files; do I need to convert them to a specific format? I'm not an expert in databases or anything. Thank you.
- What happened to Database Answers? by /u/iAmLondonDev (Database) on June 24, 2022 at 12:04 am
I've just had a look at http://www.databaseanswers.org/ recently, after a very long time since I last visited, and it turns out the site is down. Is this temporary or permanent?
- Date y-d-m in MariaDB by /u/Darxploit (Database) on June 23, 2022 at 7:14 pm
Is it possible to create a date attribute for a table with a format like y-d-m in MariaDB? I read that it only supports yyyy-mm-dd, but I got a task from my university to explicitly use y-d-m to store date values.
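One hedged reading of the task: a MariaDB DATE column is always stored internally as year-month-day, but a year-day-month ordering can be accepted and presented at the edges with STR_TO_DATE and DATE_FORMAT (table and column names here are invented):
    CREATE TABLE events (id INT PRIMARY KEY, event_date DATE);

    -- parse a year-day-month string on the way in
    INSERT INTO events VALUES (1, STR_TO_DATE('2022-28-06', '%Y-%d-%m'));

    -- render year-day-month on the way out
    SELECT DATE_FORMAT(event_date, '%Y-%d-%m') AS ydm FROM events;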
- The Beauty of HTAP: TiDB and AlloyDB as Examples by /u/ngaut (Database) on June 23, 2022 at 3:34 pm
- Automate Excel Data Extraction to MySQL with Apache NiFi by /u/InsightByte (Database) on June 23, 2022 at 10:35 am
- Best Practice on Storing Objects Composed of Objects (Postgres) by /u/sjflnjpitt (Database) on June 22, 2022 at 5:15 pm
I'm working on a project relying on a parent object composed of a list of child objects. Something like:
type Parent {
  id       int64
  name     string
  children []Child
}
type Child {
  id        int64
  name      string
  stats     []int
  parent_id int64
  ...
}
From the user's perspective, you'd create a Parent and iteratively add Child objects to it. My first schema idea is to have one table for each; in other words, a Child table containing all Child rows and a Parent table containing all Parent rows. To relate the two, I'd use parent_id as a foreign key and do something like:
SELECT * FROM child_table WHERE parent_id = '{Parent.id}'
I'm also aware that Postgres supports the storage of serialized objects, but in that case I'm worried about losing the ability to filter on Child.stats. Are there more efficient techniques or some best practice for what I'm trying to achieve here?
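For reference, the two-table idea from the post might look roughly like this in Postgres DDL (a sketch; the types and names are guesses based on the struct above):
    CREATE TABLE parent (
      id   bigserial PRIMARY KEY,
      name text NOT NULL
    );

    CREATE TABLE child (
      id        bigserial PRIMARY KEY,
      name      text NOT NULL,
      stats     integer[] NOT NULL DEFAULT '{}',  -- stays filterable, e.g. WHERE 5 = ANY (stats)
      parent_id bigint NOT NULL REFERENCES parent (id)
    );

    -- index the foreign key so fetching all children of a parent stays cheap
    CREATE INDEX child_parent_id_idx ON child (parent_id);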
- Problem with dBASE query by /u/azra1l (Database) on June 22, 2022 at 4:56 pm
I am trying to pull data from our in-house shift schedule database via PowerShell, using the Microsoft.ACE.OLEDB.12.0 provider. It is apparently a dBASE database. I fear this is a rather complicated situation to explain; I hope this is somewhat comprehensible. I am able to run queries and get results, but some fields have weird content. There are two fields containing start and end times for every shift on every weekday, fore- and afternoon. Their content looks like this: https://preview.redd.it/mlzs7fw8x6791.png?width=696&format=png&auto=webp&s=b3aceb566d64c1143d5b3886006f822e3619f5eb Let's ignore the fact that this whole thing is a database design disaster. By trial and error I found out that the "start" and "end" columns contain the time values encoded in byte format; every cell contains 5 pairs of values, one per weekday (Monday-Friday), each pair being forenoon;afternoon, stored as a string. The fields are defined as character with length 128. I managed to convert a pair of those values for one weekday into the correct hours and minutes by some obscure formula, but I only ever get one of the 5 values to parse. The query shown above was made via DbSchema, a free database client compatible with dBASE. When I parse the database via PowerShell, it only brings back a pair of values for the first weekday:
shortname : D1 start : �� end : -_
shortname : D2 start : -� end : --
shortname : D3 start : _* end : �n
shortname : D5 start : +W end : *8
shortname : H1 start : �_ end : -�
shortname : H2 start : � end : __
shortname : H3 start : _* end : �n
shortname : H5 start : +W end : *8
This is the connection string I use:
Provider=Microsoft.ACE.OLEDB.12.0; Data Source=<PathToFolder>; Extended Properties=dBASE III;
By the way, the exact same error persists if I connect via the Visual FoxPro OLE DB Provider. Will I need additional parameters in my connection string, for something like character encoding? I searched the net up and down for the better part of the day, but found nothing regarding my problem 🙁
- How We Fixed Long-Running PostgreSQL now() Queries (and Made Them Lightning Fast) by /u/LoriPock (Database) on June 22, 2022 at 1:49 pm
- Help with Table Structure / Normalisation? by /u/tits_for_all (Database) on June 22, 2022 at 9:41 am
Ok, so this might be a little confusing to explain, but I will try my best. We manufacture a product which takes in 4 categories of raw materials, say Raw Material A, B, C and D. Each category of raw material has different variants available, such as 100, 101, 102 and so on. Most products will use multiple variants of multiple categories of raw materials. So a typical product will be made such as:
Raw Material A 25% - subdivided as (101 - 20%, 102 - 80%)
Raw Material B 50% - subdivided as (101 - 50%, 102 - 50%)
Raw Material C 25% - subdivided as (101 - 33%, 102 - 33%, 103 - 33%)
I have 4 tables, one for each raw material category. When the product is being built, I have a page which shows the ideal consumption for each variant of each category. During production, raw materials are not issued in one go; they are typically issued 3 to 5 times. I have managed to build appropriate pages and tables for everything above, but I am confused about the best-practice aspect of one particular thing, and that is where I am hoping for some input. When we issue raw material, I store it in Raw_Material_Issue and Raw_Material_Issue_Line_Item tables. In Raw_Material_Issue all I am doing is saving the product_batch_number, the date and a reference to Raw_Material_Issue_Line_Item. In Raw_Material_Issue_Line_Item I am confused about how to link the rows to the raw material tables, because if I have 4 relations, one to each raw material table, then in every line-item entry 3 columns will remain empty, and I am sure this will cause problems in lookups later on. Shall I just put in a column called Category, which stores the category of raw material as text, and a column called ID, which stores the record id as text that I can later use to find the row in the relevant table, or is there a better way to do this? Please let me know if my problem is not clear and I will try to rephrase it. Thanks for your help. P.S. I am doing this on a no-code platform, Appgyver, and using Airtable as my backend. This is an MVP build for now and I plan to migrate to Xano once I get the MVP working perfectly. (Screenshots in the original post: the Line Item table, the Raw Material table and the app page.) The four categories of raw materials are "Yarn", "Tharra", "Lachchi" and "Gola". They each have their own tables and the variants are in those tables. On the app page I would like to display, date-wise, how much quantity of each item has been issued, but I am unable to do this lookup, and that makes me think I am not doing it correctly. The way I am currently trying to do it: I have simply pushed all the IDs of the variants to the line-item table (Loom_Issues_Line_Item), and another column contains the name of the item category. All these records are then pushed to the raw material issue table (Loom_Issue) along with the date.
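For what it's worth, in purely relational terms the usual fix is a single variants table carrying the category, so a line item needs only one foreign key; a hedged sketch in SQL (the names are invented, and the idea transfers only loosely to Airtable):
    CREATE TABLE raw_material_variant (
      id       BIGINT PRIMARY KEY,
      category TEXT NOT NULL,   -- 'Yarn', 'Tharra', 'Lachchi' or 'Gola'
      variant  TEXT NOT NULL    -- '100', '101', ...
    );

    CREATE TABLE raw_material_issue (
      id                   BIGINT PRIMARY KEY,
      product_batch_number TEXT NOT NULL,
      issue_date           DATE NOT NULL
    );

    CREATE TABLE raw_material_issue_line_item (
      id         BIGINT PRIMARY KEY,
      issue_id   BIGINT NOT NULL REFERENCES raw_material_issue (id),
      variant_id BIGINT NOT NULL REFERENCES raw_material_variant (id),
      quantity   NUMERIC(12,3) NOT NULL
    );
With this shape, a date-wise total per item is a simple join and GROUP BY on issue_date and variant_id, with no empty foreign-key columns.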
- Looking for a frontend program for the database by /u/VictoR18_ (Database) on June 22, 2022 at 9:12 am
In my company we are migrating an Access database to a new one written in MySQL. I have the knowledge to write and design the database, but I don't know how to create a good user interface for it. Is there any tool that can be used as a database client, or do I have to write a frontend program as well? Thanks.
- Zero Downtime Deployment with a Database by /u/ranjeettechnincal (Database) on June 22, 2022 at 8:32 am
- Build a Better GitHub Insight Tool in a Week? A True Story by /u/ngaut (Database) on June 22, 2022 at 6:01 am