Answer by Lee Jenkinson:
Be honest, tell the truth, or say nothing at all. Constant lying requires an excellent memory and is ultimately self-defeating. Cheating only diminishes you as a person.
Read “Desiderata” by Max Ehrmann.
Make eye contact.
Live the Golden Rule.
Don’t be afraid to take chances and make mistakes.
Let go of anger and hatred; they are corrosive.
Forgive, but never forget.
Answer by Saurabh Gaur:
Answer by Alexander Gugel:
- Single-threaded: Node.js is single-threaded. You can take advantage of multiple CPUs, but in general everything is designed around the event loop to achieve extraordinary performance. This can also be an advantage, since concerns such as write conflicts on files become much less relevant.
- Event Loop: The event loop is the core of Node.js, and it's a genius idea. But: don't use Node.js for blocking, CPU-intensive tasks; it is not suited for that kind of work. Node.js is suited for I/O-bound work (like web servers).
Don’t use Node.js for CPU-intensive tasks.
Node.js rocks for servers.
Node.js certainly has some disadvantages, but it is currently one of the best tools out there in order to create asynchronous, non-blocking apps. It’s great.
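The non-blocking style the answer describes is easiest to see next to its blocking counterpart. Node.js itself runs JavaScript, but the same event-loop idea exists in Python's asyncio, so here is a minimal sketch in Python (the task names and delays are made up, not from the answer): three simulated I/O calls run concurrently on a single thread, so the total wall time is roughly one delay rather than the sum of all three.

```python
import asyncio
import time

async def fake_io(name, delay):
    # Simulates a non-blocking I/O call (e.g. a network request):
    # while one task "waits", the event loop runs the others.
    await asyncio.sleep(delay)
    return name

async def main():
    start = time.perf_counter()
    # All three "requests" are in flight at once on one thread.
    results = await asyncio.gather(
        fake_io("db", 0.2), fake_io("api", 0.2), fake_io("file", 0.2))
    elapsed = time.perf_counter() - start
    # Total is ~0.2 s, not 0.6 s, even though there is only one thread.
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results, round(elapsed, 1))
```

A CPU-bound loop in place of `asyncio.sleep` would block the loop and serialize the tasks, which is exactly why the answer warns against CPU-intensive work in Node.js.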
Yahoo Betting on Apache Hive, Tez, and YARN -
by The Hadoop Platforms Team
Low-latency SQL queries, Business Intelligence (BI), and Data Discovery on Big Data are some of the hottest topics these days in the industry with a range of solutions coming to life lately to address them as either proprietary or open-source implementations on…
Answer by Abhijit Agarwal:
A script to get rank and fees information about CS master’s courses in Europe
I was looking into the idea of pursuing my master's in Computer Science in Europe. I found a website where you could view master's courses in different subjects and countries all over Europe.
But the fees are different for students within the EU and students outside of it, and the latter figure was not always included. Moreover, there was no way to see how good a college is considered to be, because there was no ranking on the page.
So I wrote a script to scrape the results on the website, find the fees for non-EU students and the ranking of each university, and write all of this data into a CSV file.
This script also made use of the Wolfram Alpha API and the Google search engine. I used the BeautifulSoup library for scraping and the lxml library for parsing the XML output returned by Wolfram Alpha.
Here is a sample output for the keyword “Graphic Design”
It is definitely not my best script but a problem that interested me a lot at the time.
You can look at the code at
Please note that even though the code works last time I checked, it is nowhere near what we call “Good Quality” code and I request that you not judge me by how messy it is. :)
You are welcome to contribute to the development of this code, and if you build on it independently, maybe send me a link when you are done?
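Since the original site and script aren't linked here, this is only a stdlib-only sketch of the final scrape-and-write-CSV step the answer describes; the HTML shape, course names, and fees are all invented, and a real scraper would use BeautifulSoup on the fetched page rather than a regex.

```python
import csv
import io
import re

# Hypothetical HTML in the shape such a course-listing page might have.
html = """
<div class="course"><h3>MSc Graphic Design</h3>
  <span class="fee">EUR 12,000/yr (non-EU)</span></div>
<div class="course"><h3>MSc Computer Science</h3>
  <span class="fee">EUR 15,500/yr (non-EU)</span></div>
"""

# Pull (course, fee) pairs out of the markup.
rows = re.findall(r'<h3>(.*?)</h3>\s*<span class="fee">(.*?)</span>', html)

# Write the extracted data as CSV (to a string buffer here; a file in practice).
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(["course", "non_eu_fee"])
writer.writerows(rows)
print(buf.getvalue())
```

The `csv` module quotes the fee strings automatically (they contain commas), which is one reason to prefer it over joining fields by hand.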
Answer by Charudutt Wasnikar:
2. Delete unwanted mails / folders on your desktop
3. Have a Beginning of Day List
4. Have an End of Day List : without BOD and EOD lists, your agenda won't be set
5. Have 20% of your time set aside for ad hoc tasks. If there are no ad hoc tasks, you can just spend the time talking with colleagues, teams, bosses
6. Mark mails as important or not important. Keep a timeline for when you want to complete each task and reply to the mails
7. Use Diary / Notebook
Problem: you are given a sequence of the numbers 1 to n-1 with one number repeated once (example: 1 2 3 3 4 5). How can you find the repeated number? What if you can't use a dynamic amount of memory (i.e., the amount of memory you use can't depend on n)?
What if there are two repeated numbers (under the same memory constraint)?
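One standard constant-space approach to this puzzle is to compare running sums against the closed-form sums of 1..m. This is a sketch (the function names are mine, and the two-duplicate case assumes two distinct values each repeated once): the single duplicate falls out of the sum alone, and the two-duplicate case adds the sum of squares to get two equations in the two unknowns.

```python
import math

def find_one_duplicate(seq):
    # seq holds 1..m plus one duplicated value; its length is m + 1.
    m = len(seq) - 1
    # Actual sum minus the expected sum 1 + 2 + ... + m is the duplicate.
    return sum(seq) - m * (m + 1) // 2

def find_two_duplicates(seq):
    # seq holds 1..m plus two distinct duplicated values a and b.
    m = len(seq) - 2
    s = sum(seq) - m * (m + 1) // 2                                # a + b
    q = sum(x * x for x in seq) - m * (m + 1) * (2 * m + 1) // 6   # a^2 + b^2
    prod = (s * s - q) // 2                                        # a * b
    # a and b are the roots of x^2 - s*x + prod = 0.
    disc = math.isqrt(s * s - 4 * prod)
    return (s - disc) // 2, (s + disc) // 2

print(find_one_duplicate([1, 2, 3, 3, 4, 5]))
print(find_two_duplicates([1, 2, 2, 3, 4, 4, 5]))
```

Both functions use a fixed number of variables regardless of n, satisfying the memory constraint.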
Post by Adriano Stephan:
Sort of like the Paris metro map for Machine Learning :-) see the full-blown version here: https://qph.is.quoracdn.n
Post by Adriano Stephan:
Scalable machine learning :: video lectures by Alex Smola
Answer by Jay Wacker:
1.) gather an immense amount of data
2.) perform relatively simple statistical analyses
3.) add expert knowledge to fix naive use of statistics
The only thing that Nate Silver (and a dozen or so different groups) showed is that the polls were accurate, and that treating them in a straightforward, honest manner gives a more accurate answer than any single poll.
To be fair, Nate Silver corrected for numerous subtleties of the polls having to do with systematic bias, whether intentional or not. His team deweighted polls that were historically off or that systematically disagreed with other polls. However, all of these expert-knowledge corrections were independently developed by several other groups, showing that standard practices gave the right result.
The key thing about taking this method forward is that without the polls (he had hundreds of them, each with thousands of properly sampled, demographically corrected respondents), Nate Silver had exactly bupkis, as he would say himself. The polls were his data. Getting the data is expensive and hard. Analyzing it correctly, if not easy, is something that hundreds of thousands of scientists are trained to do.
So if you have extensive polling, I'd advise combining the polls in a weighted average, with each poll weighted inversely proportional to its error. If you have historical information, identify ways that the polls systematically skew and correct for that. However, these events are few and far between.
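The usual way to make "weighted inversely proportional to its error" precise is inverse-variance weighting: weight each poll by 1/σ², where σ is its standard error. A small sketch, with entirely hypothetical poll numbers:

```python
def pooled_estimate(polls):
    """Combine poll results via inverse-variance weighting.

    polls: list of (estimate, standard_error) pairs.
    Returns the weighted mean and its standard error.
    """
    # Each poll's weight is 1 / variance = 1 / se^2.
    weights = [1.0 / (se * se) for _, se in polls]
    total = sum(weights)
    mean = sum(w * est for w, (est, _) in zip(weights, polls)) / total
    # The pooled estimate's standard error shrinks as polls are added.
    return mean, (1.0 / total) ** 0.5

# Three hypothetical polls: 52% +/- 3, 49% +/- 2, 51% +/- 4.
mean, se = pooled_estimate([(52, 3), (49, 2), (51, 4)])
print(round(mean, 2), round(se, 2))
```

When the polls are independent, these weights minimize the variance of the combined estimate, which is why the pooled result beats any single poll.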