Something a lot of bitcoiners don't seem to realize: the security of a blockchain is entirely due to the fact that it is too expensive to attack it. If your blockchain is "secured" by a motley bunch of ragtag microcontrollers, the first guy to point an ASIC at this chain can 51% attack it willy-nilly and render the blockchain useless.
PS. This is not a theoretical statement. It has happened to numerous altcoins.
PPS. In case it needs spelling out, the fact that it's too expensive to attack the blockchain means that it is even more expensive to secure it (because miners need to make a profit).
How good is the compiler? To paraphrase David Patterson, it's easy to get really great IPC out of code from a shitty compiler, but that doesn't mean your program will run faster.
There have been a few different proposals to use functional languages to write hardware: Chisel (https://chisel.eecs.berkeley.edu/) and Bluespec (http://www.bluespec.com/) are two others. They haven't really taken off because the productivity bottleneck in hardware is not in design but in verification, and specifically sequential verification. Combinational verification is quite easy, because modern SAT solvers can prove almost all combinational properties you can throw at them.
The trouble comes in when you start dealing with complex state machines with non-obvious invariants. I don't think these functional languages can really help much here because, unlike with C or C++ in the software world, there isn't unnecessary complexity introduced by the language (e.g., verification becoming harder due to aliasing). It's the properties themselves that are complex. Lots of progress is happening though: Aaron Bradley's PDR (http://theory.stanford.edu/~arbrad/) and Arie Gurfinkel and Yakir Vizel's AVY (http://arieg.bitbucket.org/avy/) have made pretty major breakthroughs in proving a lot of complex sequential properties, and these algorithms have made their way into industrial formal tools as well.
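To make the "combinational is easy" point concrete, here is a minimal sketch using Z3's C++ API (z3++.h) as a stand-in for the SAT engines inside industrial equivalence checkers; the two mux implementations are made up for illustration. You ask the solver for an input on which the two circuits disagree, and unsat means they agree on every one of the 2^24 inputs.

    #include <z3++.h>
    #include <iostream>

    int main() {
        z3::context c;
        z3::expr a = c.bv_const("a", 8);
        z3::expr b = c.bv_const("b", 8);
        z3::expr s = c.bv_const("s", 8);

        // "Specification" and "implementation" of the same bitwise mux.
        z3::expr spec = (a & s) | (b & ~s);
        z3::expr impl = b ^ ((a ^ b) & s);   // the classic bit-twiddled mux

        z3::solver solver(c);
        solver.add(spec != impl);            // look for a distinguishing input
        if (solver.check() == z3::unsat)
            std::cout << "equivalent for every input" << std::endl;
        else
            std::cout << "mismatch:\n" << solver.get_model() << std::endl;
    }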
It's definitely colonialism and the effect of modern western media.
I know this because I grew up seeing this change happen in India. Today in India everyone (obviously an unfair generalization, but still) wants to conform to white standards of beauty. This wasn't always the case. In South India, I gradually saw the change happen; each generation of film actors was more light-skinned than its predecessors.
And if you read the classics of Indian literature, a lot of the protagonists are dark-skinned. The funny thing is that even a lot of the Indian gods and mythological characters are portrayed as dark-skinned in the myths, but modern paintings, TV serials and movies portray these characters with light-skinned actors!
My sister-in-law from the Philippines was amazed that our drug stores have an aisle dedicated to darkening your skin (Tanning). Back home, that aisle is full of skin-whitening products.
I'm not sure if I understand what you mean by accuracy. This is not a probabilistic analysis. The solver considers the space of all possible outcomes for each match and returns satisfiable if there is some set of outcomes that satisfies the constraints. For example, it says that Stoke City can still qualify for the CL, but this requires that all of the current top 4 pretty much lose every match while the teams below them win every match. This is obviously so unlikely to happen that you might as well disregard it, but it is still a mathematical possibility and so the solver returns satisfiable.
As far as La Liga is concerned, yes, you can definitely do a similar analysis. You will have to change the code that calculates the league positions, because in La Liga if two teams are level on points then the first tie-breaker is head-to-head record, and then goal difference if the head-to-head is also equal. This should be a pretty straightforward change.
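For anyone curious what that kind of encoding looks like, here is a toy sketch in the same spirit using the Z3 C++ API. The teams, points and fixtures are invented, it ignores goal difference and head-to-head entirely, and it just asks whether there is any assignment of remaining results that leaves team X above the other two.

    #include <z3++.h>
    #include <iostream>

    int main() {
        z3::context c;
        z3::solver s(c);

        // Current points (hypothetical).
        int ptsA = 65, ptsB = 64, ptsX = 62;

        // For each remaining fixture, one variable per side giving the points
        // it takes from that match, constrained to a consistent result
        // (3-0, 0-3 or 1-1).
        auto fixture = [&](z3::expr home, z3::expr away) {
            s.add((home == 3 && away == 0) || (home == 0 && away == 3) ||
                  (home == 1 && away == 1));
        };

        z3::expr a1 = c.int_const("a1"), x1 = c.int_const("x1");
        z3::expr b2 = c.int_const("b2"), x2 = c.int_const("x2");
        z3::expr a3 = c.int_const("a3"), b3 = c.int_const("b3");
        fixture(a1, x1);   // A vs X
        fixture(b2, x2);   // B vs X
        fixture(a3, b3);   // A vs B

        z3::expr finalA = c.int_val(ptsA) + a1 + a3;
        z3::expr finalB = c.int_val(ptsB) + b2 + b3;
        z3::expr finalX = c.int_val(ptsX) + x1 + x2;

        // Is there any set of results that puts X above both A and B?
        s.add(finalX > finalA && finalX > finalB);

        if (s.check() == z3::sat)
            std::cout << "still possible, e.g.\n" << s.get_model() << std::endl;
        else
            std::cout << "mathematically impossible" << std::endl;
    }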
The code is supposed to set the carry flag when RAM[ACC] + 0x60 overflows into two bytes. Can you spot the bug? It involves my favorite C++ feature: implicit conversions, and my second favorite feature: signed chars.
The bug is that when RAM[ACC] is something like 0xFF, the (signed) char gets sign-extended to (int) -1, so the upper byte and hence the carry flag never get set.
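A hypothetical reconstruction of the kind of code being described (not the actual snippet from the post), showing both the bug and one way to fix it:

    #include <cstdio>

    char RAM[256];          // plain char is signed on most platforms
    int ACC = 0;
    bool carry = false;

    void add_0x60_buggy() {
        // Integer promotion turns the char 0xFF into (int) -1, so the sum is
        // 0x5F and the high byte (and hence the carry) is never set.
        int sum = RAM[ACC] + 0x60;
        carry = (sum & 0xFF00) != 0;
    }

    void add_0x60_fixed() {
        // Force an unsigned view of the byte before the addition:
        // 0xFF + 0x60 = 0x15F, so the carry is set as intended.
        int sum = static_cast<unsigned char>(RAM[ACC]) + 0x60;
        carry = (sum & 0xFF00) != 0;
    }

    int main() {
        RAM[ACC] = static_cast<char>(0xFF);
        add_0x60_buggy();
        std::printf("buggy: carry=%d\n", carry);   // prints 0
        add_0x60_fixed();
        std::printf("fixed: carry=%d\n", carry);   // prints 1
    }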
I predict there's lots more evil that can be milked from this fount.
I am not the OP, but I'm using Z3 in my research to help with the formal verification of hardware and firmware.
My problem is as follows. I have a microcontroller which runs firmware. I want to verify various things about the hardware+firmware combination. For instance, I may be interested in proving that user mode firmware cannot change the MMU configuration.
There are many parts to this problem. First is verifying that the kernel doesn't expose routines that let you do these kinds of things. This is a kind of s/w verification problem, somewhat well studied. Second, you want to make sure there are no h/w bugs in the microcontroller itself which the user mode software can exploit to do bad stuff. This is traditional h/w verification, also well-studied. The third is that there aren't weird corners of the microcontroller ISA specification itself which can be exploited. This is also a kind of h/w bug albeit a specification bug.
The third part is where Z3 comes in because often, there isn't any specification, and if there is, it's a bunch of PDFs or word documents. What we want is to generate a formal specification, which you can examine using formal tools to prove that it satisfies certain properties. And then we want to prove that our implementation conforms to this specification. We're using some techniques from program synthesis with Z3 as the solver for this.
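As a flavor of what the solver is asked to do (a toy single-step model invented purely for illustration, not our actual encoding), here is the user-mode/MMU property above posed as a Z3 query over a made-up ISA with one "write MMU" opcode:

    #include <z3++.h>
    #include <iostream>

    int main() {
        z3::context c;

        z3::expr user_mode = c.bool_const("user_mode");
        z3::expr opcode    = c.bv_const("opcode", 8);
        z3::expr mmu       = c.bv_const("mmu", 32);     // MMU config before the step
        z3::expr operand   = c.bv_const("operand", 32);
        z3::expr WR_MMU    = c.bv_val(0x42, 8);         // hypothetical "write MMU" opcode

        // Transition relation for one instruction: the write only takes effect
        // in supervisor mode, otherwise the MMU register is unchanged.
        z3::expr mmu_next = z3::ite(opcode == WR_MMU && !user_mode, operand, mmu);

        // Property: in user mode the MMU configuration never changes.
        z3::expr prop = z3::implies(user_mode, mmu_next == mmu);

        // Ask for a violating instruction; unsat means the property holds for
        // every opcode, operand and starting state.
        z3::solver s(c);
        s.add(!prop);
        std::cout << (s.check() == z3::unsat ? "property holds"
                                             : "counterexample found") << std::endl;
    }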
Hi, I can't see an email address in your profile so sorry for the public message. With the lowRISC project http://lowrisc.org/ and related research at University of Cambridge we're becoming very interested in formal verification of hw+sw. I found this work out of Kaiserslautern interesting <http://www-eda.eit.uni-kl.de/eis/research/publications/paper.... Could you say any more about your research? I'd be very interested in an email conversation - I'm at alex.bradbury@cl.cam.ac.uk
This is the #1 reason why I won't stay in the US after my PhD. I have no interest in being a random CEO whim away from either having to uproot my life and move back home or frantically trying to land a new job, all in the short period of 15 days.
For those curious, the #2 reason is that an H1B makes it hard to do even simple extracurricular activities, like writing a book or consulting on the side for an open source project you contribute to, let alone starting a company.
The H1B is fundamentally broken and I'm amazed that so many people put up with it. Credit to the US for being such a desirable destination, I guess.
There are fast tracks to permanent residence after a PhD [1]. But if you don't want to immigrate (owe US taxes for foreign income, etc.) then you're right. Staying on H1B is no way to live.
You can only be on an H1B for 6 years (3 years, renewable once). After that, it's either greencard or go home for a few years, and everyone I know has been able to get a greencard even if the "process" is kind of tough.
However, I cannot get a greencard in the country of my residence (where I have many coworkers with USA greencards, ironically enough), so I think it is not very fair.
One of my old professors used to joke that "for every paper, there is a conference that accepts it." Unfortunately this is somewhat literally true. This isn't unique to any one country as the SCIgen successes demonstrate and I don't really see a solution to it.
Of course there are regional IEEE conferences. But in general, the international IEEE conferences and journals by the bigger societies are probably the best ones out there.
I have published quite a bit in the physical sciences/engineering field and in my experience the IEEE journals have a far better peer review process than AIP, ACS, Elsevier and even NPG. The same goes for the conferences.
What makes a conference good isn't the IEEE label but rather the reputation of the conference itself and its corresponding ability to attract a high quality program committee.
The important point here is that there isn't a shortcut to evaluating conference quality. You have to know your field, know who is worth listening to and who isn't, know what the top conferences are, what the mediocre conferences are and so forth. If a new conference springs up, you need to be able to judge whether to pay attention by looking at the PC.
> Of course there are regional IEEE conferences. But in general, the international IEEE conferences and journals by the bigger societies are probably the best ones out there.
You can't really make general statements like that. For example, CAV proceedings are published by Springer (http://www.springer.com/computer/theoretical+computer+scienc...). JPDC is a well-regarded journal in the parallel computing community that is published by Elsevier.
And a "regional" conference doesn't mean it's a bad conference. DATE (http://www.date-conference.com/) is probably the second best design automation conference behind DAC. ASPDAC, ATVA, APLAS, FSTTCS are some well-regarded regional conferences and there are plenty more like these. Not all of these are IEEE sponsored either.
> I have published quite a bit in the physical sciences/engineering field and in my experience the IEEE journals have a far better peer review process than AIP, ACS, Elsevier and even NPG. The same goes for the conferences.
I'm not trying to slight IEEE here. I looked through my publication list and it turns out all my papers are in IEEE conferences and journals! But on the other hand, a few of them could easily have ended up in a venue like CAV and this wouldn't magically make them worse papers.
The key point I want to make here is this: there is no substitute for knowing your field. If you're judging conference quality by the IEEE label, you're doing something wrong.
I used to think that if it's IEEE or ACM, it's OK (not good) to publish there. Nowadays, I am not so sure, after seeing a bunch of low quality "international" conferences in Asia, especially in India. ACM seems a little better than IEEE...
Now my rule is to only publish in conferences that I know or that are well known in the field....
Also, I reject any requests to serve as a TPC member or reviewer for conferences from this region... The papers are normally low quality and a waste of my time to read...
I was in Bangalore some weeks ago and I tried the query "OK Google, directions to vidyarthi bhavan." And it worked perfectly. I was amazed!
The first thing that amazed me was that Google understood what I meant by vidyarthi bhavan! Note that vidyarthi bhavan is a generic phrase in many Indian languages which just means 'student building.' In Bangalore, it refers to a specific restaurant in Gandhi Bazar famed for its masala dosas. And I bet there are plenty of people who live in Bangalore right now and don't know about this restaurant. The contextual knowledge here is amazing. And it wasn't just geolocation; I was about 15km away from Gandhi Bazar at that point, which Bangaloreans will know corresponds to an eternity given the traffic situation. And the software did all this by recognizing a phrase consisting of two words, neither of which is English!
That moment was an epiphany for me; it was the precise moment I realized that personal digital assistants are here, and they actually work.
Sorry, but I think this is just Google Maps. Sitting here in the USA, typing "vidyarthi bhavan" into maps.google.com takes me to the one in Gandhi Bazar, too. Maybe because it has 155 reviews?