I like this idea, but I do not really understand who it is aimed at. The focus seems to be on reproducibility of scientific experiments and code, which is great! Many existing code artifacts are WOGSL (Works On Graduate Student's Laptop) which is the CS equivalent of "runs when parked".
So, let's break down the fields of CS for which this should be applicable:
* Systems: This won't work except for the few systems projects that are entirely in-RAM AND will work on tinycore's kernel version
* ML: This I can see, especially with the seeming focus on dataset management. Much ML is compute-bound, so the overhead of using the FUSE filesystems is hopefully negligible.
So, is this focused on ML and ML-using code and experiments? If so, I think that should be clarified. I think a lot of systems folk will be (rightly or wrongly) turned away from it due to the seeming overhead of the various hyper* extensions. Not to mention that they are all written in Node/JS (again, rightly or wrongly, many systems folk will not want to run their stuff on platforms written in JS).
I like the direction this project can go, but there seems to be a lack of focus or direction in your mission right now.
> So, let's break down the fields of CS for which this should be applicable:
> So, is this focused on ML and ML-using code and experiments?
You're completely missing the point. Please look into 'Computational Science' (also called Scientific Computing or Numerical Analysis); it applies to 80%+ of the disciplines that exist today (e.g., computational physics, computational biology, computational economics, the computational branches of the engineering disciplines; the list goes on).
Yup, I can see it for computational experiments and "applied CS" fields. I realized this soon after posting, but I didn't bother to update my comment.
However, this still isn't clear in their website. I will give them the benefit of the doubt since they are early in their project, but I think it would behoove them to nail down their mission sooner rather than later.
This is probably what I get for being in the CS bubble. =)
Well, to their credit, they did mention "scientific research reproducibility", which is a very well-known phrase in computational circles.
But I agree, it would help if they expanded on this from the pure-CS point of view. In particular, if they mentioned things like containers, CS people would be interested in finding out what they're up to.
I guess you could also say that CS is one of those "applied math" fields :)
Seriously though, this kind of platform is a critical component of scientific reproducibility. The dream is that we can have code, data, and the results of composing the two in the same revision control system. A minimal layer allowing the execution of Linux software would support the use of legacy code and binaries on this new platform. Javascript has its advantages, but it would be a waste to build a data RCS and require all functions on the data to be written in it.
And to go a bit further, it's not just for science. For example, you could write a HN clone in dat. I could fork it and get both your code and all the posts.
Microwave works on line-of-sight and weather patterns easily mess it up and drop the link.
That's why you usually also run your microwave data over a fiber link as backup. Of course, when microwave works, it gives you a decent improvement in latency, at the cost of bandwidth.
Another advantage of having the microphones (yes, there can be more than one!) near the ear itself is for DSP. Many hearing aids use multiple microphones to selectively amplify sounds from the direction you are facing. I think this would work best with microphones near the ear, or at least in a known position.
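The directional trick is usually some form of delay-and-sum beamforming. A minimal sketch (toy signals and a one-sample delay, purely illustrative; real hearing aids do this adaptively and per frequency band):

```python
# Minimal delay-and-sum beamformer sketch: two microphones spaced apart
# hear the same source with a small time offset. Delaying one signal to
# align it with the other and averaging reinforces sound from the target
# direction, while off-axis sounds partially cancel.

def delay_and_sum(front_mic, rear_mic, delay_samples):
    """Align the rear mic to the front mic and average the two signals."""
    aligned = rear_mic[delay_samples:] + [0.0] * delay_samples
    return [(f + r) / 2.0 for f, r in zip(front_mic, aligned)]

# Toy example: a "voice" arriving from the front reaches the front mic
# one sample before the rear mic.
voice = [0.0, 1.0, 0.0, -1.0, 0.0, 1.0, 0.0, -1.0]
front = voice
rear = [0.0] + voice[:-1]   # same signal, delayed by one sample

out = delay_and_sum(front, rear, delay_samples=1)
# After alignment the two copies add coherently, so the voice survives.
```

A sound arriving from the side would hit both mics with a different offset, fail to align, and be attenuated by the averaging, which is why mic placement matters.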
A bigger issue with separating the microphone and amplifier is cosmetic: hearing aids sadly have a stigma attached to them and people are more likely to wear them if they are invisible. There is a reason why hearing aids are produced in hair colors.
Also, I found that the best way to get rid of feedback was to get an ear mold rather than using an "open-fit" mold. This is a clear separation between the speaker and the microphone and pretty much solves the problem in my experience.
EDIT: I belatedly noticed that you addressed the cosmetic issue in your post. I don't see the elderly changing, but our generation just might.
"I don't see the elderly changing but our generation just might."
I'm youngish (mid-30s) and recently had my hearing aids replaced, and I realized I had a strong preference for sticking with fairly visible behind-the-ear units rather than something more "discreet". I want them to be visible so that people I'm interacting with will be more sympathetic about repeating themselves and may make a (possibly unconscious) effort to speak more clearly.
The line of thinking that got me over being self-conscious was "Lots of people walk around with assistive devices for their vision... why should I be embarrassed about the same thing for my hearing?"
My first experience with a close friend who had partial hearing loss led me to realise how much lip reading helped her. If she wasn't looking at your face, her responses would often be nonsensical.
Also, I work in a noisy environment where hearing protection is mandatory, and I find I have less trouble understanding people if I can see their face.
That's very good reasoning. If a person with visual impairment walks around with a white stick, it's obvious and people normally cater for their needs. It shouldn't be any different with hearing.
What a good point.
I have an elderly friend who has suffered complete hearing loss in one ear after an infection and the other ear can only detect very very low frequencies, and he's constantly saying "PARDON?". It must be very difficult to hear ANYTHING going on, other than the rumble of lorries and buses. I wonder if they could put a pitch-shifting circuit in his hearing aid to shift sounds up/down so that they fall within his hearing range, whilst not shifting frequencies already in that range. That would help significantly, surely?
> I wonder if they could put a pitch-shifting circuit in his hearing aid to shift sounds up/down so that they fall within his hearing range, whilst not shifting frequencies already in that range. That would help significantly, surely?
If you read the article, you'll see that's more or less what most modern hearing aids do, via a technique called multi-band compression.
Multiband compression works by splitting the incoming audio into different bands, much like your bass/mid/treble controls on your EQ only works on bass/mid/treble parts of the frequency range. Compression is then applied to only those frequencies and then they are summed together.
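The per-band step can be sketched in a few lines (thresholds, ratios, and the toy two-band split below are made up for illustration; the filter bank that actually splits the audio into bands is omitted):

```python
# Illustrative sketch of multi-band compression: each band gets its own
# compressor, then the bands are summed back into one signal.

def compress(band, threshold, ratio):
    """Reduce the gain of samples whose magnitude exceeds the threshold."""
    out = []
    for s in band:
        mag = abs(s)
        if mag > threshold:
            # Above threshold: shrink the overshoot by the ratio.
            mag = threshold + (mag - threshold) / ratio
        out.append(mag if s >= 0 else -mag)
    return out

def multiband_compress(bands, settings):
    """Compress each band independently, then sum them together."""
    compressed = [compress(b, t, r) for b, (t, r) in zip(bands, settings)]
    return [sum(samples) for samples in zip(*compressed)]

# Toy signal already split into a loud "bass" band and a quiet "treble" band.
bass   = [0.8, -0.8, 0.8, -0.8]
treble = [0.1, 0.1, -0.1, -0.1]
out = multiband_compress(
    [bass, treble],
    [(0.5, 4.0),    # squash the loud bass hard
     (0.5, 1.0)],   # leave the quiet treble untouched
)
```

Note that every sample stays in its original band; only the loudness within each band changes, which is exactly why this isn't pitch shifting.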
There is no pitch shifting in multiband compression. Pitch shifting involves moving the frequency up or down by a number of cents, semitones, octaves, etc. It's the effect used to get the "chipmunk" voice (high-pitched and squeaky), where a normal voice is fed into a pitch shifter and shifted up or down. It is also how harmonisers work: they work out the frequency you're singing at and shift it up seven notes (or an arbitrary amount) so you can sing a harmony with yourself.
You're right, they're still fairly subtle and probably not the first thing someone would notice about me, but if I turn my head slightly, you're bound to notice my ear moulds/tube.
I too wear BTEs with glasses, and most people are surprised when I tell them I wear hearing aids. They cannot see them.
This is especially true of modern "Receiver In The Ear" (RITE) models where instead of a tube carrying sound, you have a very thin wire going into your ear canal.
> hearing aids sadly have a stigma attached to them
It's probably worth distinguishing between two kinds of phenomena that might be described as carrying a stigma:
- Something might lead other people to mock or otherwise denigrate you for exhibiting it. Being fat is a good example here; fat people get a lot of messaging from society that they're worse people for being fat.
- Something might carry no real significance to the rest of society while still being viewed, by the individual, as painfully embarrassing. There's a traditional view that women don't like to wear glasses because they think the glasses ruin their looks. I don't know how well that currently corresponds to reality; I've known one girl who really hated her glasses for that reason and another who, not needing glasses of her own, liked to take other people's and wear them -- but that's the prototype of a "category two" stigma: a woman who hates wearing her glasses even though no one around her sees anything wrong with them.
I suspect that hearing aids are firmly within the second category, which means getting people to wear them "openly" should be doable.
"I suspect that hearing aids are firmly within the second category, which means getting people to wear them "openly" should be doable."
You suspect wrongly. I saw attitudes toward my father change when he wore one, ranging from outright verbal abuse to assumptions of stupidity and senility.
17 years ago, I worked for a hearing aid manufacturer. The common terms in use there were BTE and ITE, for "behind the ear" and "in the ear." Frankly, I thought the initialisms were poorly conceived: ITE is three syllables, same as "in the ear," and less meaningful to the uninitiated, while BTE only saves you one syllable, again at the cost of meaningfulness. But either way, your terminology is both understandable and correct.
Off-topic: While I was there, they asked employees to submit ideas for a new hearing aid marketing slogan, with the incentive of a free vacation to Vegas going to the person who submitted the one they used. For some reason, I did not win the vacation with my suggestion: "Stick It In Your Ear!"
It would be pretty interesting to see people assume that a ten-year-old suffered from senility because he was wearing a hearing aid. By definition, it only applies to the old.
At the low end, it was wait staff in restaurants occasionally ignoring him and asking other folk at the table "what would he like", or people assuming he couldn't hear and talking about him — at the high end, a guy shouting "deaf fuck" at him repeatedly on the street for no obvious reason.
I'm not trying to say that this happened every day — especially the outright insults. But it was enough to be noticeable.
I suspect, as @hibbelig commented, age had something to do with it.
Ben Heckendorn is known for very clever console modifications and has forayed into accessible controllers in recent years. He sells a custom made one-handed controller for a few hundred dollars, and this may be worth a look!
This is exactly right. GNOME, KDE and other out-of-the-box solutions make sure to set the GTK and Qt themes so that everything looks pretty. When you use a WM (window manager) rather than a DE (desktop environment), it is up to you to setup all the ancillary things such as themes and daemons. This is why .xprofile exists!
Also, I find lxappearance an easy and quick way to set GTK themes via GUI rather than having to edit the .gtkrc files for every option.
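For anyone curious what that .xprofile setup looks like in practice, here's a minimal sketch (the theme and daemon names are just examples; substitute whatever your own setup uses):

```shell
# Example ~/.xprofile for a bare window-manager setup.
# A DE like GNOME or KDE would do all of this for you.

# Tell GTK2 apps which rc file (and thus theme) to read.
export GTK2_RC_FILES="$HOME/.gtkrc-2.0"

# Ask Qt apps to follow the GTK theme instead of their default.
export QT_STYLE_OVERRIDE=gtk2

# Daemons a DE would normally start for you (names illustrative).
xsettingsd &      # hands out theme/font settings to running apps
nm-applet &       # network-manager tray icon
```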
This is a very interesting problem, especially since consumer computing devices quickly shifted to smaller storage when SSDs arrived. A few years ago, a laptop with 500GB of hard drive space was rather small; nowadays, a 500GB SSD is top-tier. Granted, SSD capacities are rising quickly, but combined with small computing devices like phones and tablets, space is at a premium.
Funnily enough, this problem has been solved before! If you look at distributed file systems, especially something like Coda (http://en.wikipedia.org/wiki/Coda_(file_system)), they are designed to make the local computer a "thin client for storage". Basically, local storage is used as a cache for the main copy on the server, and this behavior is transparent to user applications via the FS driver.
Dropbox uses a user-space program to sync files and cannot intercept system calls. As such, it had no choice but to synchronize everything rather than bringing things in on demand and releasing local copies that are not used. Nowadays, this can easily be done via a FUSE driver on Linux. I do not know if something like FUSE exists on other platforms, though.
Overall, I personally love the idea of a local machine being a "cache" for the server copy. However, the technical challenges are greater than a simple mirroring scheme, and there may be UX issues as well.
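The core of the Coda-style idea can be sketched in a few lines (the in-memory "server" dict and LRU eviction here are a toy stand-in for a real remote store and a real cache policy):

```python
# Toy sketch of "local disk as a cache for the server copy": reads hit
# a small local cache first and only fall through to the (simulated)
# server on a miss. Eviction is plain least-recently-used.
from collections import OrderedDict

SERVER = {"/paper.tex": b"\\documentclass{article}", "/data.csv": b"a,b\n1,2\n"}

class CachingFS:
    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # path -> file contents
        self.server_reads = 0        # counts round-trips to the "server"

    def read(self, path):
        if path in self.cache:
            self.cache.move_to_end(path)        # mark as recently used
            return self.cache[path]
        self.server_reads += 1                  # cache miss: go remote
        data = SERVER[path]
        self.cache[path] = data
        if len(self.cache) > self.capacity:     # over capacity: evict LRU
            self.cache.popitem(last=False)
        return data

fs = CachingFS(capacity=1)
fs.read("/paper.tex")   # miss: fetched from the server
fs.read("/paper.tex")   # hit: served from the local cache
fs.read("/data.csv")    # miss: evicts /paper.tex to make room
```

The hard parts a real system faces — write-back, conflicts when disconnected, and deciding what to prefetch — are exactly the things Coda spent years on, which is why this is more than a simple mirroring scheme.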
Dokan for Windows (last updated 2011) and MacFUSE for OS X do exist, but they're not battle-tested, to say the least. It wouldn't be wise for Dropbox to stake its reputation on such unreliable software.
MacFuse worked well when it was first released, but it has had no maintainers in years and does not really work with any recent OS X version at the moment.
It looks like there is something called OSXFuse which might be more recent and maintained, but I haven't looked into whether it actually is.
Commercial FUSE/Dokan analog for Windows costs around 15K for an all-inclusive package. I'm sure Dropbox can afford it if they are interested in pursuing this option.
Thanks! Spacemonkey looks terrific and is pretty much what I envision! I often feel that with the move to "cloud computing" over the last decade, we have essentially made a dichotomy between working "locally" and working "in the cloud". However, they really should be seamless! The cloud should extend your computational resources and make it more accessible, not replace it! With things like spacemonkey, Office 365 and other "seamless" apps, it seems that we are finally learning how to make things be location and device independent. Cloud computing is the vanishing mediator that is finally starting to vanish!
Thing is, userspace networking is a lot like the GPU business. Often times, the software is free or even OSS, but the hardware is proprietary.
I know Intel has software for their NICs to enable userspace networking, but I don't think any HFT shop uses it; it may be all right for hobbyist experimentation. I left the industry about a year ago, so my information is a bit out of date, but my former employer used either Solarflare cards with the OpenOnload stack (very good cards and awesome software) or the Mellanox CX series with the VMA stack (amazing hardware with mediocre software, though VMA is now open source, so perhaps that has improved).
Note that these cards are a few thousand dollars each, so out-of-reach for hobbyists.
I can't comment on HFT because I have no experience on that, and their focus is latency rather than throughput.
For the latter (which matters in routers, for instance), netmap runs on everything, either natively or with some emulation. The Intel 10G cards (which DPDK of course supports) are around $300-400, I think; 1G cards start at a few tens of dollars (not that you need userspace networking at 1G).
My understanding is that XFS is now default because it has far better support than ext4 for large (hundreds of TB to PB) volumes. In addition, RedHat employs many core XFS members.
With the quashing of the slow metadata performance (http://lwn.net/Articles/476263/), XFS seems to be just as good as ext4 all round, but with more future-proofing for large volumes. Keep in mind that RHEL releases are supported for around a decade.
Sure, but these are mostly just arbitrary caps. xfs performance at 50TB is supposed to be better than ext4 at 50TB. I don't know what xfs actually does over ext4, but I do know a little bit about ext4; on-disk it looks very similar to the ancient Unix FFS. xfs may use more scalable on-disk structures.
Looks like XFS supports concurrency better (both on the request side, and on the kernel<->backing store side).
Back in the day, we used xfs instead of ext3 so that when something happened and there was a bad shutdown, our samba servers weren't stuck in fsck for an hour (or a day).