
LLMs to be fair, not AI


I did that with a Duron 600->900MHz, but it was so long ago that I don't remember the exact date


Are you sure you didn't just flip the 100->133MHz FSB setting in the motherboard? The low-end Durons didn't really need the multiplier changed, as they had a 100MHz FSB while the Athlon 1000 had a 133MHz FSB and used the same motherboards. You could just flip the BIOS setting and get that exact stated overclock without the pencil trick.


My Duron 600 ran Windows 98 SE at 950MHz, on an Abit motherboard. Whatever happened to Abit? Miss those mad geniuses.


They had financial problems and a class action lawsuit against them in the early 2000s. The company was sold and then eventually shut down or folded into the parent.


Those were the times. Unlocking a lot more performance just by tweaking. These days everything is locked down. I suppose this is more efficient.

The last time I felt the same awe, although not from my own handiwork, was when I moved to an M1 MacBook and when I installed Fedora on my desktop. Everything was so fast and silent. Really amazing.


I feel like a lot of the tweaking features popularized by Abit are now available on any "gamer" board from the major manufacturers. Intel and AMD now even make chipsets targeted at that segment, and there are a lot more bins for CPUs these days, most of which come "pre-overclocked", even dynamically so with turbo boost and the like.

The Athlons and Durons of this era had exposed dies, just a few SKUs, no built-in thermal throttling or speed boosting at all, and oversized high-cfm heatsinks and fans felt like a new thing.


More like these days everything is already clocked hard by default. There's basically no point in overclocking modern unlocked chips; they are already clocked to 90% of their max speed. Compare that to the above example of 600 vs 900 MHz (50% faster!).

Some locked chips are clocked pretty slow, but that's normal segmentation :(


What made them mad geniuses? I saw their equipment from time to time in catalogs but was too young to realize they had some kind of special reputation.


https://en.wikipedia.org/wiki/ABIT_BP6 is a great example. It was the first motherboard to allow the use of two unmodified Intel Celeron processors in a dual symmetric multiprocessing (SMP) configuration. Not only did this cost drastically less than purchasing two SMP-certified CPUs from Intel at the time, but the Celerons of that era were also the first Intel chips with on-chip L2 cache, which was clocked at the full speed of the CPU as opposed to the off-chip L2 cache running at half the CPU clock on the Pentium II. Abit was also one of the first companies to include jumperless overclocking features in the BIOS.

If you look closely, you can even see the blue thermal sensors Abit placed in the center of the CPU sockets (CPUs had no built-in thermal sensor at the time), which greatly eased overclocking.

In short, they made boards that gave you the full range of what was possible, not just what was on the marketing sheet for the CPU.


Same! Dad and I did this a few times; it got us around having to upgrade for more performance when we could still get much more out of the current system.


I had tried that with a Duron 600 but couldn't overclock it. I don't remember what went wrong. Maybe the motherboard didn't support it.


The best way to overclock the lower-clocked Durons was simply to change the FSB in the BIOS settings. You didn't really need the pencil trick, as they had 100MHz FSBs in an era where 133MHz was more common.


I remember doing that on my Athlon XP. Had too many issues with cooling though. (Probably my fault.)


No need for that, mate; just deploy Home Assistant or something similar and you will get this (and more) out of the box


Grafana is a hell of a lot nicer & more controllable than HA

HA is great, but it's not the answer to everything


Why not both? You’ll need to run a server either way.

HA can export data to Prometheus. Setting up and running HA is much easier than figuring out how to get a set of different smart devices to export metrics to Prometheus/Influx. Let HA deal with that.
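For reference, roughly what that plumbing looks like (the `homeassistant.local` hostname and the `HA_TOKEN` variable are just placeholders for your own instance and a long-lived access token): enable HA's Prometheus integration with a bare `prometheus:` entry in configuration.yaml, and it exposes a metrics endpoint you can point a scraper - or curl - at:

    # scrape HA's Prometheus endpoint (requires a long-lived access token)
    curl -s -H "Authorization: Bearer $HA_TOKEN" \
      http://homeassistant.local:8123/api/prometheus | head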


Agreed.

I live off-grid, so energy monitoring is a big deal for me. HA is fine for "at a glance", but if I want any kind of detail, I use Grafana. I actually have my old openHAB instance still running purely because I can't be faffed setting up all the piping from MQTT into Influx again.

It’s also possible to integrate the usage over time using a dynamic time window to get Wh figures from wattage, which is enormously useful for me, and is more accurate than the figures HA gives in their power system.
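As a rough sketch of what I mean (assuming a simple log of `epoch_seconds watts` samples; the file name is made up), a trapezoidal sum over the actual sample intervals gives the Wh figure:

    # trapezoidal integration of power samples into Wh (3600 seconds per hour)
    awk 'NR > 1 { wh += (prev_p + $2) / 2 * ($1 - prev_t) / 3600 }
         { prev_t = $1; prev_p = $2 }
         END { printf "%.1f Wh\n", wh }' samples.tsv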

HA is dead useful for getting alerts when the laundry finishes, though - dumb machine, smart plug, look for a sudden drop in power. Also does all our climate control.

So different tools for different jobs.


Seconded - HA's graphs are great for a simple "is this going up or down" glance but when you want to put a whole bunch of things together for comparison or perform aggregations or calculations, that's when you want Grafana et al.


It might be, but for all of the examples in the blog post, HA does this out of the box.


Right up until the Home Assistant UI turns into a lagfest, the installation dies, and you can't debug why because Docker. At least that's what happened to me. And no, it wasn't RPi SD power issues. This happened on an otherwise-stable amd64 server.

The Home Assistant authors' hostility towards simple native distributions is now a show stopper for me. Long term reliability is more important than quick initial setup.


HA is actually pretty debuggable. Just install the SSH plugin, then SSH into the HA box, and then simply "docker exec" into the target HA container.
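Roughly speaking (the container name varies by install; `homeassistant` is just what I've seen on a default setup):

    # list the HA-managed containers, then open a shell inside the core one
    docker ps --format '{{.Names}}'
    docker exec -it homeassistant bash    # or sh, depending on the image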


... and then not have any of your usual development tools, environment, system layout, or repair techniques because you're inside someone else's "works on my system" that they threw over the wall.

It's obviously possible to debug what goes on inside a Docker image. It's just not something I'm particularly interested in dealing with, especially under duress.


> ... and then not have any of your usual development tools, environment, system layout, or repair techniques because you're inside someone else's "works on my system" that they threw over the wall.

The thing is, the "it works" is reproducible because of containers. Which is a step above just hoping that it works.

HA is also easy to "patch". You can just install your custom components in `config/custom_components`, which can also be used to "override" core HA files.

Finally, if you are doing intrusive development, you can easily launch HA locally. macOS, Linux, and WSL are supported. You will lose the ability to install add-ons via the addon manager, but that's about it.
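For reference, a bare-bones local run is roughly just the following (the `~/ha-dev` config directory is an arbitrary choice of mine, not a requirement):

    python3 -m venv venv && . venv/bin/activate
    pip install homeassistant          # core only, no Supervisor or add-ons
    hass --config ~/ha-dev             # web UI comes up on port 8123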

FWIW, I had the same aversion to their custom OS and their crazy container-based setup initially. For a couple of years, I used to run HA as a Python app and managed the dependencies manually. Then I tried the HAOS and it... kinda just worked.


> because you're inside someone else's "works on my system" that they threw over the wall.

FWIW, this can also be called a stable state you can retreat to. And build upon, e.g. by adding a layer of debugging tools.

I don't really like to deal with Docker, but at least I have reasonable certainty it'll work. I prefer a system package manager or an MSI, but failing that, it beats having to build something when it's near-guaranteed that what I'll get is not the binary the authors had in mind, if it even runs at all.

(Then again, I routinely rebuild Emacs to stay on the bleeding edge. But it took a while to work out all the usual dependency mess, and I even broke my system once doing it.)


It's certainly within my gamut to jump into an embedded system to debug it, bringing/building tools as I go. I'm just not looking to opt into doing that on something that doesn't need to be that complex in the first place. Same reason I run one decently powerful amd64 server that does many things rather than a stack of Raspberry Pis, one per software package.


But how would you do it differently?

You need to host a bunch of daemons (MQTT, ZWave and ZigBee bridges, and whatever else you might need). And a bunch of these daemons can have their own gnarly dependencies (e.g. they can be written in JS and built with NPM, ugh).

So you kinda _need_ to use Docker to make it at least sane.

And if you're using Docker for the plugins, then why not use it for the HA core itself?

And once you do that, you don't really need much from the host system. So why not use a minimalistic OS instead of something like Debian?


At the time my setup didn't require other daemons like that. But if I had been in that position, I would have just set up the other daemon under Debian and pointed HA at it.

These days I'd say that NixOS captures that requirement, allowing orchestration of many daemons and other system config to be abstracted into a packaged solution (eg NixOS Mailserver), that the user can override as much or as little as they'd like.

I believe NixOS does package (or at least attempts to package) HA, but given my past experience and what I believe is still the throw-it-over-the-wall desire of the HA maintainers, I'm wary of adopting it as an overarching solution. I'm certainly not ruling it out for performing some functions, like UI. I just would rather set up my automation efforts as MQTT-first, keep logging and automation rules as their own separate things, and not be fully committed to HA.


> At the time my setup didn't require other daemons like that. But if I had been in that position, I would have just set up the other daemon under Debian and pointed HA at it.

You can do that just fine even now. I'm doing experiments with voice control, and I run the complete AI stack locally on my computer. So I just set up everything as regular background processes.

You just can't expect HA to be able to do autoupdates for these daemons.

The other problem is that most of the required dependencies are not packaged in Debian. So you'll have to install multiple NodeJS servers and tons of NPM packages somewhere on your system.

> These days I'd say that NixOS captures that requirement, allowing orchestration of many daemons and other system config to be abstracted into a packaged solution (eg NixOS Mailserver), that the user can override as much or as little as they'd like.

You can do that with HA as well. Just push in a new image, and tag it appropriately.

The last time I played with Nix, it needed to download tens of gigs of data for a few programs. I don't think this is acceptable for HA.

You can definitely do HA in a piecemeal fashion, but there's just no way it can be done as a reproducible system that you can give to your grandmother. Given these constraints, HAOS is actually pretty remarkable.

> I just would rather set up my automation efforts as MQTT-first, keep logging and automation rules as their own separate things, and not be fully committed to HA.

Raw MQTT still needs a UI that is user-friendly. And even with MQTT you'll need to run ZWave and ZigBee bridges.


> You just can't expect HA to be able to do autoupdates for these daemons.

I'm not expecting or even wanting HA to do autoupdates. A good framing of the crux of the problem here is that I want to use HA but not HAOS.

> even with MQTT you'll need to run ZWave and ZigBee bridges.

Yes, the point is wanting to keep them as part of my overarching OS-level deployment config so that I can manage them alongside email, nginx, matrix, netfilter, hostapd, kodi, etc.

I only brought up NixOS specifically because you asked for an example of a different approach of encapsulating and abstracting service configuration. I'm happy using NixOS, regardless of what you consider a dealbreaker. I used to choose Debian instead. If you prefer HAOS then please continue using HAOS. If I had to create and hand off a machine to my "grandmother", I might even choose HAOS for that myself. We shouldn't need to argue about distributions when talking about software packages.


> I'm not expecting or even wanting HA to do autoupdates. A good framing of the crux of the problem here is that I want to use HA but not HAOS.

You can do that. It's not even hard; the HA documentation is pretty stellar in that regard: https://www.home-assistant.io/installation/#advanced-install...

The HA team rightly doesn't want to officially support it, to avoid being inundated by people who don't want to keep the pieces.

> Yes, the point is wanting to keep them as part of my overarching OS-level deployment config so that I can manage them along side email, nginx, matrix, netfilter, hostapd, kodi, etc.

Then this is just not going to happen, unless the world changes a lot. There's just no way something like HA can be both useful for most people, and be released according to the Debian Stable calendar. HA has to move fast to adapt to third-party API changes, new integrations, and to just be able to bring features to users.

> I only brought up NixOS specifically because you asked for an example of a different approach of encapsulating and abstracting service configuration.

NixOS is not that much different from the HA approach. You also can't just get into the NixOS system and edit random files in its storage tree; you'll end up with a broken system. So you need to create a new flake, and then do the changes within this flake's env. If it's a deep dependency, you'll need to modify the dependent software to use your new patched version.

Of course, Nix is far more flexible than HAOS, but then they are also made for different kinds of users.


Back when I was using HA, Core and Container did not exist (at least as first-class recommendations), so I'll admit not having been really aware of what they were. Core would have met my deployment policies at the time, and if it had existed I would have gone that route instead of Supervised and been much happier. So I will give credit there for HA getting better distribution options since my poor experience.

> There's just no way something like HA can be both useful for most people, and be released according to the Debian Stable calendar

People running Debian Stable expect and want slower updates. It's a feature, because things don't change out from under you. It means perhaps not being able to use some new device, but it also means that your current setup just doesn't break/change out of the blue to accommodate some new feature. Essentially, the reasons you've given are already being taken into account by people running stable - like yes, running an HA moderated by Debian Stable while trying to use fleeting online APIs is going to be a bad time. Just like trying to use yt-dlp out of the Debian repo is.

> NixOS is not that much different from the HA approach

Sure, but the difference is that I opted into using NixOS as my OS distribution to meet my needs for my entire environment, whereas HA pushes using their HASS [0] mini distribution as part of using HA. We've discussed the necessary reasons for that, and I agree that the all-encompassing solution makes sense for many people. But the fact remains that it is essentially managing a new instance of a bespoke distribution. And that's what really made for my negative experience.

With the advent of Core, it does seem like my previous specific situation has been addressed. But the memory of my experience remains, and then I see things like https://github.com/NixOS/nixpkgs/pull/126326 which make it look like that same rejection of the larger ecosystem dynamic is still alive and well. It just gives me pause, regardless of the continued existence of HA-on-NixOS.

As I said, I'm certainly not against Home Assistant. I'll eventually try using it again when I want some kind of easy UIs for my automation setup. The problems I'm currently solving really just require logging, graphs, and automation rules. And so I've just decided to focus on MQTT-first as the nucleation point, rather than putting all my eggs in the Home Assistant basket again.

[0] Whatever the mini Linux distribution that runs inside the Docker container is called. When writing the previous comment I had thought that was HAOS but now I'm seeing that HAOS isn't included in Supervised or Container. I believe it was called HASS back then, so maybe it's still HASS?


It's a Python app; of course being distributed as a Docker image is the sanest way of doing it. I don't see why you couldn't just pip install it if you really wanted, but having been a Python developer for close to two decades, I wouldn't want to.


That was the standard way a long time ago, and the first startup would take a really long time because it would install even more stuff. And sometimes fail. It wasn't very reliable if you used any addons, and some required a ton of extra steps that it couldn't automate like the modern deployments do now.


I'm talking about distribution package managers, not pip.


I've happily run a dockerized HA on Debian for years now, with no need to do any complicated debugging (and even if I did, it would not be difficult to inspect it properly)


Dockerized HA on Debian is exactly what died on me. About 5-6 years ago. I'm sure it works just fine for most people. Just once bitten, twice shy.


Nobody is preventing you from running Home Assistant core and deploying everything else yourself manually.

Demanding the authors who gave you the software for free also provide support for an installation method they've offered up with no support is a bit ridiculous, don't you think?

That attitude is what causes open source projects to die though...


What do you mean "demanding support"? I remember Home Assistant authors being actively hostile to people packaging their software outside of the official Docker or RPi images. Which is why it wasn't in the Debian repository, pushing me down that Docker path in the first place. Here's the same dynamic on an associated project in 2021: https://github.com/NixOS/nixpkgs/pull/126326

If anyone chimes in and says they've been running Home Assistant from nixpkgs (where I am now) for several years with no hiccups, then I will certainly reconsider my opinion. But based on my experience and what I've continued to read since, it feels like trying to do that is an uphill battle. One I'm not looking to take on, especially for automation I'm relying on.


So... your example is a developer of HA stating that he sees major flaws with how they're distributing his package, and that he has absolutely no interest in supporting users that pull his code in a way that is unmaintainable by him.

YOU believe he should support this anyway, because of various "we promise end-users won't reach out to you" which is comically incorrect because history has shown repeatedly that a user's first step when something is broken is to google package_name broken - which will absolutely turn up the author's name.

BECAUSE he doesn't want to support his software being repackaged in a way he believes isn't supportable, you're upset. You want him to support your unicorn config because that's what you want to do, and his refusal to comply makes him a bad person.

Thank you for reinforcing EXACTLY why open source devs burn out. He has a workflow that he is willing and able to support and doesn't want to support anything outside of that. Your response is: but you need to do it for me because it's what I want.


Did we read the same thread? Nobody asked the HA developer to support anything, rather that developer started the conversation by making demands and then kept at it.


“Making demands” which were: please don’t package my code in your distro that has dozens of out of date packages my code depends on that will break. Because I don’t want to deal with end users bugging me about it being broken.

I think the most surprising thing is that you can’t see how unreasonable your complaints are.


If you attempted explaining how you think my stated position is unreasonable, perhaps I could see it. So far you've only attacked strawmen, such as claiming that I am demanding support from HA or claiming upstream was being asked to support nixpkgs.

What I do see is a project calling itself FOSS, while its maintainers really don't like it being used as Free Software. If one wants to control downstream uses of one's software, the answer is quite simple - release it under a proprietary license. Don't grant freedom while going on and on about how you support freedom, but then be upset when someone actually uses that freedom to do something.

> deal with end users bugging me about it being broken.

The nixpkgs maintainers asked how much this was actually happening, and even preemptively proposed solutions. OP didn't engage and just repeated his demands. And in general how is this any different from the common DRM-authoritarian refrain that companies are justified locking down devices they make, lest end users modify them and then clueless people might attribute the outcome to the original manufacturer?


I'm also looking at a custom solution for my current migration from WiFi sockets to Zigbee. It seemed impossible to do an offline installation of Home Assistant, and there were discouraging signs for running it without an internet connection.

There seems to be a Sonoff USB stick that might act as a hub and allow command-line monitoring of all devices, which should be perfect for feeding into Grafana/Prometheus.
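If it turns out to be one of the Zigbee coordinator sticks, my rough plan (assuming zigbee2mqtt and a local Mosquitto broker, which I haven't verified against that exact stick) is to just watch the MQTT topics and scrape from there:

    # dump everything the paired devices publish; each message is JSON that
    # an exporter or a small script can turn into Prometheus metrics
    mosquitto_sub -h localhost -t 'zigbee2mqtt/#' -v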


HA will happily run offline. If you mean HAOS, I don't know what it does internally (it's an unorthodox Linux distro), but once it installs it should also run offline without issues. I'm also using their SkyConnect Zigbee coordinator and it works very well.


Yeah, one of the tests was an RPi image and it wouldn't complete without a LAN internet connection (I only had 4G). And it seemed far too weighty for a bit of home automation.

I recall the online requirement was for some NTP server requests that can't be disabled.


Yeah, that's more of an RPi hardware requirement, as it doesn't have a battery-backed clock and you realistically want accurate time on your smart home controller, even - especially - after it cold boots following a power loss.


Why not both?


Yep, I've got one and don't use it too much. Too big for scrolling, too limited (software-wise) for work. But Apple knows the iPad might cannibalize the Mac and limits its uses on purpose.


> But Apple knows iPad might cannibalize mac and limit it's uses on purpose

It felt like the goal was to overtake the Mac during the 2015-2019 era; all the real engineering focus was on the iPad, and the Macs were underpowered and not really fit for purpose.

Why would Apple choose a platform where they don't get 30% of every Creative Cloud sub when they could have had that?

The only reason they backtracked was that Mac sales didn't fall off and the iPad just isn't that good for doing real work on.


I believe it's simply more lucrative to keep selling both devices to the same target group than to try to solve the users' problem with a single device.

Everything in Apple is designed to silo off the two product groups.

An "iPad with MacOS" would just shift revenue from the MacOS division to the iPad division, losing a MacOS customer and probably NOT gaining a iPad customer (as he would have purchased an iPad anyway).

Just as much as developing a MacBook convertible is not an issue of user experience but an issue of unnecessary cannibalization of iPad sales...


By that logic, the iPhone wouldn’t have been able to play music as soon as it launched. Yet that was part of the whole pitch: “an iPod, a phone, and an internet communicator”.


Not so sure.

From the mid-to-late 90s onwards, a mobile phone was basically an essential item.

I was never tempted to buy an iPod, but combine the phone and iPod and give me internet access to boot... sold.


> I was never tempted to buy an iPod, but combine the phone and iPod and give me internet access to boot... sold.

Before the iPhone there were already phones which could play music and access the web. I even remember some Motorolas which interacted directly with iTunes. The iPhone didn’t succeed just by smooshing those together.

Either way, that’s neither here nor there, the point is precisely that Apple didn’t shy away from cannibalising their own product.


It was cannibalizing a cheaper iPod for a more expensive iPhone. The iPad would be taking from the more expensive MacBook market.


I don't know how it is relevant what Apple did on other products, especially "pre-iPhone".

The point is that TODAY the PC line and the iPad line of Apple are quite notably siloed into very specific usage patterns.

There is no technical reason for that, but the distinct commercial reason that there is nothing to gain in terms of revenue or profit by combining the two products into one.

They both sell fine and at great margins separately; there is little to gain by building a 2000 USD iPad Pro that supports the use-cases of both a 600 USD iPad and a 1600 USD MacBook.

Quite bluntly: you want the iPad to be convenient in a workflow as far as possible, and then SUCK really badly in a way only a fully synchronized MacBook can fix.


The iPod is a product of the pre-iPhone times. Apple used its dominance in Music players to enter the cellphone space.

The iPhone was an iPod combined with an iTunes store, allowing the user to buy content without being in front of a PC, and only buy from Apple.

It was an iPod and a Browser that could be sold in huge volumes via a carrier.

Ah yeah. And a Phone.


And then the iPod died.


Yes, exactly, that’s the point. Apple did it to themselves. They didn’t “silo off the two product groups”.


Then either your point is the same as the one I made, or I don't get your point.


I still have my own


This is the same reason behind the Apple Pencil not working on the iPhone. Despite the iPhone approaching the size of an iPad mini, I can't use the incredibly expensive Pencil on an iPhone because, according to Apple, only the iPad should be used for tablet stuff.


What? The Apple Pencil works because there's a special digitizer layer on the screen of Pencil-compatible devices that allows it to work. This isn't included on the iPhone. Same reason a Samsung S-Pen doesn't work on devices that don't support it.


I think the technical reason why the Pencil doesn't work is beside the point here.

Apple is building the hardware, and they decide that the Pencil use-case an iPhone user may have shall not be covered by buying an Apple Pencil, but by buying an iPad (and an Apple Pencil).


The technical reason is important, though. If it were totally free I suspect they'd allow it to function, but it isn't… so burdening the 200M iPhones with the additional cost of the Pencil hardware is a trade-off not worth taking. Just like Samsung not "allowing" the S-Pen to work on most of their phones, since adding the digitizer element would be a silly cost adder, especially for their super cheap phones.


It's a decision of product proposition, and Apple decided that the Pencil use-case shall support iPad sales and not be cannibalized by the iPhone.

They also decided for a while that all their premium iPhones shall have "Force Touch", an entirely unique display technology, only for iPhones, that senses pressure without the potential for additional accessory sales.

These are all valid decisions. They are not a charity, they operate to maximize the profit they can gain from each customer.

The iPad has the big "issue" of barely needing to be replaced with new models, as most use-cases are consumption-oriented and there are no real disrupting sales-driving requirements for iPad media consumption.

So the Pencil was created to drive the proposition towards Media CREATION, because people would buy a new, more-expensive iPad then and requirements for that segment are constantly increasing (better pencils, lower latency, more-demanding apps).

Also in the past year: the iPhone increases its focus on media recording with more complex video features, while the iPad tags along with demanding media processing use-cases.


Wasn’t that the period when Apple were positioning themselves to get the Macs away from Intel? I’m not sure the goal was to let the iPad overtake as much as it was to get its processors ready to take over from Intel.


> iPad might cannibalize mac

I don't think it's possible. Traditional Mac computers can win in so many ways.


> But Apple knows iPad might cannibalize mac and limit it's uses on purpose

Apple isn’t afraid to cannibalise its own products. They did exactly that with the iPhone in regard to the iPod. If someone is going to displace one of your most successful products, it better be yourself with something even more outstanding.

It would have been in Apple’s best (financial) interest to have the iPad cannibalise the Mac because they’d have more control and earn more money from app sales.


This is nowhere close to being analogous. Apple could sacrifice the iPod because they had much bigger fish to fry.


Not sure if this is true. I mean, wasn't the vision that you actually don't need the Mac for most things when the iPad came out?


For certain groups of people (the majority?) that is reality; as long as you don't need compilers, IDEs, or virtualization, you can do pretty much anything on an iPad.


That is an unrealistic expectation for 15 years, especially for a 2017 EV.


Our 8-year-old 2015 Kia Soul EV is about 15% degraded (on the worst cell, which could be replaced individually, btw). Battery degradation is generally fairly linear between 100% and 70% SoH. So I don't see why I shouldn't expect another 8 years on it.
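Back-of-the-envelope, assuming the degradation really does stay roughly linear (which is the big assumption):

    15% lost over 8 years ≈ 1.9% per year
    (85% now - 70% floor) / 1.9% per year ≈ 8 more years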

It depends on how much it is driven of course. The mileage is probably a bit below average on that one. I have a short commute to work. We did some road trips in it though.

You gotta take into account that old EVs with small batteries will probably often be used as a second car, and so used less often. That’s the case for us since we got a newer bigger car as our family grew. We don’t take the Kia on road trips anymore and my wife takes the bus to work.


15 years is completely realistic; I only started having problems with my 2007 Ford Escape Hybrid after 17 years of heavy use and 260k miles. Keep in mind that 2007 was when the 1st generation hybrids were just coming out, too, so the technology is much better today.

Don't forget that there are other parts of a car that degrade with time as well. Eventually most well-used cars will have an engine or powertrain issue, if they even make it to 15 years of use without being totaled in an accident or simply sold second-hand or misused.


The 2007 Ford Escape Hybrids were using a lot of shared parts with the Toyota Prius, so the battery pack is definitely not Li-ion. The newer high-density Li-ion packs will not survive 15 years without losing significant capacity, if they survive at all.


The Escape Hybrid had a nickel-metal hydride battery. I don't see why a Li-ion battery pack wouldn't be usable for 15 years with good battery management. Both battery and ICE cars have reduced performance and range due to capacity/efficiency degradation, so replacing and recycling a battery or engine at 15 years to restore performance is not unreasonable. No car, EV or ICE, lasts forever. Likewise, most people want to upgrade to a newer, flashier car before 15 years.


> I don’t see why a li-ion battery pack wouldn’t be usable for 15 years.

For 2 reasons, primarily. First, Toyota chose nickel-metal hydride batteries because of their higher charge-cycle life, at the cost of a much lower energy density (about half that of Li-ion). Second, they designed their system so that the discharge rate and the discharge level of their battery pack stayed low, thereby maximizing the lifetime of their battery pack, at the cost, once more, of extra weight.

So those two aspects combined mean that you can expect a much longer real-life usage of your battery pack before reduced performance becomes an issue.

I fully agree that a battery pack replacement after 15 years could be considered reasonable (as long as the build quality of the rest of the car warrants it, which is not a given nowadays). But if the mean-time between replacement is around 5 years, then it becomes unreasonable.


The battery chemistry is completely different; you can't assume similar characteristics.


Because we have plenty of real-life examples of cars that are a lot younger.


Is it? The article lists 2015 as the year where things improved a lot, and 2017 is well past that. The numbers are low, and even those are inflated due to recalls.

I've seen >>10-year-old laptops where the battery is still good enough to go from charger to charger. Just go to eBay and check out 2009 MacBooks. That's ~15 years now.

I don't think this is unrealistic if you can live with the heavier degradation.


I agree it is cheap, but I want to at least try something without paying.


It's probably important that consumers vote with their wallet here


I don't think they can offer you a trial option when Apple charges them per install.


Agreed, it will probably take the EU a few more iterations to force Apple into making it usable.


Hallucination^2


I think it is too early to evaluate Terraform/OpenTofu. They're diverging now and it looks like OpenTofu is bringing in some wanted features.


I agree. It has only been a few months since the split. I have noticed more and more uptake of OpenTofu amongst colleagues, and I've personally switched. The thing that makes the difference is what is running on people's laptops, because that's what people will eventually put into prod.


> The thing that makes the difference is what is running on people's laptops, because that's what people will eventually put into prod.

"It works on my machine!"

"Then we'll ship your machine"

Docker: https://miro.medium.com/v2/resize:fit:720/format:webp/1*Ibnw...


Lol plain old VMs have been shipping your machine since well before Docker was around.


And in some cases, unfortunate git commands will ship your machine too!


That works while OpenTofu and Terraform files are compatible - but once they no longer are, presumably you'd have to standardise on one or the other.


The point is that once they are no longer compatible, people will standardize on the one that they're familiar with, which is most likely the one that's running on their machine.


There are enough pretty annoying and long-standing Terraform issues that if OpenTofu started picking them off, I'd consider switching.

You can kinda see this with Vim and Neovim, where both continue to exist and benefit each other.


Encrypted state files are either done or coming soon. That's going to be a big one, since HashiCorp used that as a selling point for Terraform Cloud.


But currently, people are equally comfortable with both; the CLI commands are exactly identical between the two, save for the name of the binary itself. In any org where both are in use, if people are forced to choose at some point, they will have to balance many other factors besides familiarity, such as features and confidence in the platform.
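To illustrate, the day-to-day workflow really is just a binary swap at this point (modulo whatever diverges after 1.6):

    # identical subcommands, different binary name
    terraform init && terraform plan -out=tfplan
    tofu init && tofu plan -out=tfplan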


> The thing that makes the difference is what is running on people's laptops, because that's what people will eventually put into prod.

I disagree - I think support from deployment tooling (like Atlantis) is the bigger proof. If you are running Terraform on your local machine, it is likely a very small company.


There is no incentive for users of tf to move; consumers are not impacted by the licensing changes.

OpenTofu hasn't shipped a stable 1.7 with removed blocks yet, whilst Terraform is already on 1.8 with provider functions.


Hey, tech lead of the project here!

Just to clarify, provider-defined functions are coming in OpenTofu 1.7, along with e2e state encryption. Generally, I recommend not comparing version numbers of Terraform and OpenTofu post-1.6.

Implementing the e2e state encryption was non-trivial, and we wanted to make sure we get it right, so that's why the release took us a while. We also got a slight additional delay due to needing to handle the C&D letter OpenTofu got from HashiCorp[0], but that's all sorted now.

The beta for 1.7 however is coming out this week, with the stable release planned in the next ~3 weeks.

[0]: https://opentofu.org/blog/our-response-to-hashicorps-cease-a...


I'm definitely in the camp that has moved my tiny company's infra to OpenTofu. Thanks for all your hard work.


That’s awesome! Appreciate the kind words :)


In the very early days of Terraform, when it was 2 months or so old, I helped a little. How many people did so much more than me with all these projects, only to be later betrayed by relicensing.

    > git log --pretty=format:"%h %an %ad %s" --date=short | grep "Luke Chadwick"
    dcd6449245 Luke Chadwick 2014-07-30 Add documentation for elb health_check
    0eed0908df Luke Chadwick 2014-07-30 Add health_check to aws_elb resource
    96c05c881a Luke Chadwick 2014-07-30 Update documentation to include the new   user_data attribute on aws_launch_configuration
    15bdf8b5f9 Luke Chadwick 2014-07-30 Add user_data to aws_launch_configuration
    8d2e232602 Luke Chadwick 2014-07-29 Update documentation to reflect the addition of associate_public_ip_address to the aws_instance resource
    974074fee9 Luke Chadwick 2014-07-29 Add associate_public_ip_address as an attribute of the aws_instance resource


> How many people did (so much more than me) with all these projects to be later betrayed by relicensing.

Were you betrayed? They did a thing you licensed them to do. That’s the whole point of non-copyleft free software licenses, after all! It’s kind of odd to specifically choose a license which allows others to use one’s code in proprietary software, then be upset when others use one’s code in proprietary software.

If one wishes one’s software and its users to remain free, the answer is to use a copyleft license.


They can and did use it in commercial software before relicensing. I don't have a problem with that. It's a betrayal to get a huge community together under one expectation and then decide you don't like that expectation any more. Had they used even something like the AGPL from the start, it would not have been successful in the same way and would not have gotten the same levels of outside contributions, so yes, it's a betrayal.

It's a limited betrayal, because that license also allows for OpenTofu to exist and fork, but the need to do that is just annoying.


Just to be clear: MPLv2 is a copyleft license.


Doh! You’re right.


I don't use your project (nor Terraform), but great project name!


Anecdotally, I know several teams likely to adopt OpenTofu when state encryption ships: https://terrateam.io/blog/opentofu-feature-preview-state-enc...


IIRC, GitLab runners give you a big warning with tf telling you to use OpenTF, so that provides some incentive.


There's also no incentive to use the original Terraform.


Don't do that for banks. I had the reverse problem. I went to the USA (coming from the EU) and used a VPN for accessing bank portals. I paid using my card somewhere in SF, and then logged in to my bank using the VPN. They blocked me because of possible fraud…


This is why he is setting up his own exit node and paying for his own instance. Using VPNs is an easy way to get a block from your bank. VPN IPs are frequently added to denylists because they have a high chance of being used by bad actors. Although to be honest, cloud providers also get ban-hammered from time to time.

The best thing would be to have your own physical machine act as an exit node instead of relying on a cloud instance. That would bring a whole series of new problems for keeping a machine up and running while you are away, but it's doable.


It's not about IP reputation, but the bank detecting a payment in one country followed by a login in another. This is exactly what the bank would see if someone stole your card.


I used a VPN that was set up in my house, not a public one. I clarified with the bank - they saw me logging in from my country and then using my card in SF, and blocked me. A private exit node will not help.


Seconded. Banks have all kinds of IP access rules that can bite you. Connecting from an IP allocated to one of the well-known clouds will certainly raise a flag, as throwaway VPSes are often used for fraud.


It's much better to have an exit node at home.


It won't help. That was my case. It is simply the geolocation difference.


I understand why they do it (fraud mitigation in layers), but it's still ridiculous. Most US banks don't even offer TOTP as a second factor; they force SMS 2FA, which is not secure at all.

Maybe once passkeys or hardware keys are widely adopted they can remove the atrocious fraud detection.


Many people (still) don't understand how 2FA works.

Whereas 99.999%? of the users won't use tor/$vpnCompany/cloud provider IPs. It's all in terms of which is less likely to lead to a support request.

I worked with a relatively big bank that used a tool like Akamai or Cloudflare. With those tools you can just ban/block entire countries (think any IP from Iran, Russia, etc.) or entire ASNs.


You were almost certainly picked up by the faster-than-flight rule, or, as I like to call it, 'the superman rule'.

It’s probably the second most common geo rule after geoblocking.


Plus, it will be even worse for the battery, as it will always be close to zero (at least in the VW Group 1.4 PHEV). I own one and like it though. But I charge daily (from solar when the weather allows) and it is sufficient for my commutes, so the engine only starts on long trips.

