How can a computer evaluate the 'realness' of a review? The idea behind Yelp, crowdsourced business reviews, is great, but do we really prefer trusting hyperactive web contributors chasing attention over experts (e.g. Zagat)?


From the perspective of a web searcher, a SERP with reviews for four [Buffalo Grove dentists] written by amateurs beats a SERP with zero reviews written by experts.

From the perspective of a large advertising firm, consistently replicating that at scale gives you a lot more inventory to sell.

One could say something similar regarding content farms and big daddy G.


I think they do things like filter out reviews from users who have only posted one review, or whose names appear to be fake, etc. But who knows exactly how their algorithm works.
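If I had to guess, it's something like this. A minimal sketch in Python, where the Review fields and thresholds are entirely made up (nobody outside Yelp knows the real signals):

    # Hypothetical heuristic filter; pure speculation about the kind of
    # signals a review site might use, not Yelp's actual algorithm.
    from dataclasses import dataclass

    @dataclass
    class Review:
        author_review_count: int  # reviews the author has posted
        author_name: str          # display name on the account
        rating: int               # 1-5 stars

    def looks_suspicious(review: Review) -> bool:
        # Single-review accounts are the classic sockpuppet signal.
        if review.author_review_count <= 1:
            return True
        # Crude fake-name check: no last initial, or digits in the name.
        name = review.author_name.strip()
        if " " not in name or any(ch.isdigit() for ch in name):
            return True
        return False

    print(looks_suspicious(Review(1, "John D.", 5)))     # True: only one review
    print(looks_suspicious(Review(12, "xXguy99Xx", 1)))  # True: fake-looking name
    print(looks_suspicious(Review(12, "John D.", 4)))    # False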

They're afraid of competitors leaving negative reviews for a business, and there's probably something to that, but there's no transparency about it. And they've been doing the extortion thing for a while now: http://dealbook.nytimes.com/2010/03/19/yelp-under-fire-from-...

I always thought it would be great if there were a Yelp for landlords, since landlords are rarely held accountable and you can't exactly ask for references before you sign a lease. But there would probably be too many false reviews to make it work, or tenants wouldn't feel comfortable speaking freely (because, unfortunately, tenants still need references). There's actually a website trying it, but it doesn't seem to be going anywhere right now: http://www.donotrent.com

Hopefully people will see Yelp for what it is: a small sample of the population, possibly quite unlike you, writing probably biased reviews.


Why downvoted?


Zagat is primarily crowdsourced.


A new command for collection-level compaction, better concurrency, index enhancements, authentication with sharded clusters, replica set priorities (set a preference for which server should be primary), data-center awareness, polygon searches, and a few other things. It's all in the release notes.
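The polygon searches are fun to play with. A rough pymongo sketch, with a made-up "places" collection and invented coordinates ($within was the 2.0-era spelling; newer servers call it $geoWithin):

    # Sketch of a 2.0-style polygon query via pymongo; the database name,
    # collection name, and coordinates are all invented for illustration.
    from pymongo import MongoClient

    db = MongoClient()["demo"]
    db.places.create_index([("loc", "2d")])  # polygon queries need a 2d index

    # Roughly a box around lower Manhattan (longitude, latitude pairs).
    polygon = [[-74.02, 40.70], [-74.02, 40.72], [-73.97, 40.72], [-73.97, 40.70]]

    for place in db.places.find({"loc": {"$within": {"$polygon": polygon}}}):
        print(place)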


Yeah, but are there any performance improvements? I'll be waiting to see the blog posts about it that are undoubtedly forthcoming.


I can tell you I'm excited about the new indexes. This weekend I'm going to restore our production database into a local 2.0 instance and see what kind of memory savings, if any, we get (I'm cheap, and RAM is my biggest issue; the possible performance improvement mentioned will be harder for me to measure).

"Indexes are often 25% smaller and 25% faster (depends on the use case)"


Huge performance increases, actually. Having the write lock yield whenever the DB hits the disk is a big deal and should resolve a large number of my production complaints. The improved indexes are also quite tantalizing.


If you are in the NYC area, you should check out the MongoDB track at PgEast next week to find out ;) https://www.postgresqlconference.org/


10gen develops and supports MongoDB, the open source, high-performance, scalable, document-oriented database. 10gen delivers technical support, professional services, and training for commercial-grade deployments of MongoDB.

New York, NY: Openings for senior and junior C++ Database Engineers

Bay Area, CA: Looking for a Driver Development Engineer and Senior Support/QA Engineer

More info at http://bit.ly/10genjobs

