I once saw a talk by Brian Kernighan in which he joked about how in three weeks Ken Thompson wrote a text editor, the B compiler, and the skeleton for managing input/output files, which turned out to be UNIX. The joke was that nowadays we're a bit less efficient :-D
Ken is definitely a top-notch programmer. A top-notch programmer can do a LOT given 3 weeks of focus time. I remember his wife took the kids to England so he was free to do whatever he wanted. And he definitely had a lot of experience before writing what became the first version of UNIX.
Every programmer who has a project in mind should try this: set aside 3 weeks of focus time in a cabin, away from work and family, gather every book or document you need, and cut off the Internet. Use a dumb phone if you can live with it. See how far you can go. Just make sure it's something you've already put a lot of thought and a bit of code into.
After thinking more thoroughly about the idea, I believe low-level projects that rely on as few external libraries as possible are the best ones to try it out on. If your project relies on piles of 3rd-party libraries, you're stuck the moment you hit an issue, with no Internet to help you figure it out. Ken picked the right project too.
> low level projects that rely on as few external libraries
I think this is key. If you already have the architecture worked out in your head, then it's just smashing away at the keyboard. Once you pull in a 3rd-party library, you can spend most of your time fighting with it and learning about it instead.
Exactly. Both projects mentioned in this thread (UNIX, Git) had clear-cut visions of what the authors wanted to achieve from the beginning. Nowadays it is almost impossible to FIND such a project. I'm not saying that you can't write another Git or UNIX, but most likely you won't even bother using it yourself, so what's the point? That's why I think "research projects" don't fit here -- you learn something and then you throw them away.
What I have in mind are embedded projects -- you are probably going to use the result even if you are the only user. So that fixes the motivation issue. You probably have a clear-cut objective, so that ticks the other checkbox. You need to bring a dev board, a bunch of breadboards, and electronic components to the cabin, but those don't take up much space. You need the specifications of the dev board and of the components used in the project, but those are just PDF files anyway. You need some C best practices? There must be a PDF for that. You can do a bit of experimental coding before you leave for the cabin, to make sure the idea is solid and feasible and the toolchain works. The preparation gives you a wired-up breadboard and maybe a few hundred lines of C code. That's all you need to complete the project in 3 weeks.
Game programming, modding, and mapping come to mind, too. They are fun, clear-cut, and well defined. The catch is that you might need the Internet to check documentation or algorithms from time to time, but it is still better to cut off the Internet completely. I think they fit if you are well into them already -- and then you boost them by working 3 weeks in a cabin.
There must be other low-level projects that fit the bill. I'm NOT even a good, ordinary programmer, so my choices are few.
Back in my early career, the company I worked for needed an inventory system tailored to their unique process flow. Such a system was already in development and was scheduled to launch "soon".
A few months went by and I got fed up with the toil. I sat down one weekend and implemented the whole thing in Django. I'm no genius, yet I ended up with a solution that my team used for a few years until the company's own system finally launched. In a weekend.
Amazing what you can do when you want to Get Shit Done!
That's fine when it's self-motivated, but it sets a terrible precedent. Doing things like this can plant in management's mind the unrealistic expectation that you'll always work at that pace, which can be unhealthy and burnout-inducing.
I worked at a place in love with their ERP system. Some people there had been using it for 30+ years, going back to when it ran in DOS.
My Excel skills completely blow, and I hate Microsoft with a passion, but one long Saturday afternoon I created a shared spreadsheet that had more functionality than our $80K-a-year ERP system. I showed it to a few of the more open-minded employees, then moved it to my server, never to be shown again. I just wanted to prove that when I said the ERP system was pointless, I was right.
A big one is the lack of peer reviews and processes, including team meetings, that would slow them down. No PM, no UX, just yourself and the keyboard with some goals in mind. No OKRs or tickets to close.
It's a bit like any early industry, from cars to airplanes to trains. The earliest models were made by a select few people, and there were several generations of them before today, where GM and Ford have thousands of people involved in designing a single car iteration.
IMHO the biggest thing is that they were their own customer. There was no requirements gathering, ui/ux consultation, third party bug reporting, just like you said. They were eating their own dogfood and loving it. No overhead meant they could focus entirely on the task at hand.
We aren't talking about a very large amount of code here. Mainly the process was implementing several similar systems over the previous 10 years. You'd be surprised how much faster it is to write a program the fifth time, now that you know all the stuff you can leave out.
A lot of the supposed "features" we have in Unix nowadays are really artifacts of early, primitive limitations -- dotfiles, for example.
If you're willing to let everything crash if you stray from the happy path you can be remarkably productive. Likewise if you make your code work on one machine, on a text interface, with no other requirements except to deliver the exact things you need.
It is also the case that the first 80% of a project's functionality goes really quickly, especially when you are interested in and highly motivated about the project. That remaining 20%, though, is a long tail; it tends to be a huge slog that kills your motivation.
If I write a bunch of tests for new code, and all of them pass on the first attempt, I'm immediately suspicious of a far more egregious bug hiding somewhere…
Where feasible, I like to start a suite with a unit test that validates the unit's intended side effects actually occur, as visible in their mocks being exercised.
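Something like this, as a minimal hand-rolled sketch in C (hypothetical names; the comment above doesn't assume any particular language or mocking framework): the unit reaches its collaborator through a function pointer, the test swaps in a mock, and the assertions just confirm the intended side effect actually happened.

    #include <assert.h>

    /* The "unit" talks to its collaborator through a function pointer,
       so a test can substitute a mock. (Hypothetical example.) */
    typedef void (*store_fn)(int value);
    static store_fn store;

    static void record_reading(int value) {
        if (value >= 0)
            store(value);            /* the intended side effect */
    }

    /* Mock: just remembers how it was exercised. */
    static int mock_calls;
    static int mock_last = -1;
    static void mock_store(int value) { mock_calls++; mock_last = value; }

    int main(void) {
        store = mock_store;
        record_reading(42);
        assert(mock_calls == 1);     /* the side effect occurred...     */
        assert(mock_last == 42);     /* ...with the expected value      */
        return 0;
    }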
Sure. For Patreon subscribers at the $5/month tier and up, I also have a course on making integration ("e2e", "functional") tests more maintainable by eliminating side effects.
/bin/true used to be an empty file. On my desktop here, it's 35K (not counting shared libraries), which is an absolute increase of 35K and a relative increase of ∞%.
I am joking of course, git is pretty great. Well, half-joking: what is it about Linux that it attracts such terrible interfaces? git vs hg, iptables vs pf. There is a lot of technical excellence present, marred by a substandard interface.
I'd argue that ordinary programmers can perform the same *type* of exercises if they:
- Set aside a few weeks and go into Hermit mode;
- Plan ahead what projects they have in mind, which books/documents to bring with them. Do enough research and a bit of experimental coding beforehand;
- Reduce distractions to a minimum. No Internet. Dumb phone only. Bring a Garmin GPS if needed. No calls from family members;
I wouldn't be surprised if they could level up their skills and complete a tough project in three weeks. Sure, they won't write a UNIX or Git, but a demanding project is feasible with the research done before going into Hermit mode.
I think so. I don't think Ken had zero thoughts about UNIX and then suddenly came up with a minimal but complete solution in under 3 weeks. Previous experience counts for a lot, too. Wozniak was able to quickly design some electronics, but he had probably already bagged his 10,000 hours (to borrow the popular metaphor) before he joined HP.
They had both been working on the Multics project at Bell Labs before the Labs pulled out of it, and they had already written several languages.
While some ideas, like hierarchical filesystems, were new, it was mainly a modernized version of CTSS, according to Dennis Ritchie's paper "The UNIX Time-sharing System: A Retrospective".
I was playing with this version on simh way too late last night, taking a break from ITS, and being very familiar with v7, 2.11, etc. It is quite clearly very cut down.
I think being written in assembly, with an assembler they produced by copying DEC's PAL-11R, helped a lot.
And yet... after working for years on an ultra-complex OS intended to provide 'utility scale' compute, writing a fairly simple OS for a tiny mini would be that much easier... if not so for us mortals.
It isn't like they had just come out of a coding boot camp... they needed the tacit knowledge and experience for two people to push out 100K+ lines in one year over 300 bps terminals, etc.
Compiling an emulator is quite easy: have a look at simh. It's very portable and should just work out of the box.
Once you've got that working, try installing a 2.11BSD distribution. It's well-documented and came after a lot of the churn in early Unix. After that, I've had great fun playing with RT-11, to the point that I've actually written some small apps on it.
The PDP-11/03 emulator itself is good enough that it can run the RT-11 installer to create the disk image you see in the browser version. The VT240 emulator is good enough that the standalone Linux version can be used as terminal emulator for daily work. Once I have time, I plan to make a proper blog post describing how it all works / what the challenges were and post it as Show HN eventually.
The Dave's Garage YouTube channel has an episode where he documents the pitfalls of compiling 2BSD for a PDP-11/83: https://www.youtube.com/watch?v=IBFeM-sa2YY Basically, it's an art on a memory-constrained system.
What I found entertaining was that when he was explaining how to compile the kernel, I went "Oh! That's where OpenBSD gets it from." It's still a very similar process.
I've been messing around with RSX-11M myself! I find these early OSes quite fascinating. So far I've set up DECNet with another emulator running VMS, installed a TCP stack, and a bunch of compilers.
> It's somewhat picky about the environment. So far, aap's PDP-11/20 emulator (https://github.com/aap/pdp11) is the only one capable of booting the kernel. SIMH and Ersatz-11 both hang before reaching the login prompt. This makes installation from the s1/s2 tapes difficult, as aap's emulator does not support the TC11. The intended installation process involves booting from s1 and restoring files from s2.
good luck though. my emulator is not particularly user friendly, as in, it has no user interface. i recommend simh (although perhaps not for this thing in particular).
There is LOADS of gray area, overlap, and room for one's own philosophical interpretation... But typically simulators attempt to reproduce the details of how a particular machine worked, for academic or engineering purposes, while emulators are concerned mainly with getting the desired output. (Everything else being an implementation detail.)
E.g. since the MAME project considers itself living documentation of arcade hardware, it would be more properly classified as a simulator. While the goal of most other video game emulators is just to play the games.
I don't want to offend you, but this has made me wonder even more what the difference is.
It just feels like one is an emulator if its philosophy is "it just works",
and a simulator if it's "well, sit down kids, I am going to give you proper documentation and show how it was built back in my day",
but I wonder what that means for programs themselves...
I wonder if simulator == emulator is truer than what JavaScript's truthiness rules allow.
Irrelevant to the concept being expressed, and it does not invalidate it.
The goals merely overlap, which is obvious. Equally obviously, if two goals are similar, then the implementations of some way to attain those goals may equally have some overlap, maybe even a lot of overlap. And yet the goals are different, and it is useful to have words that express aspects of things that aren't apparent from merely the final object.
A decorative brick and a structural brick may both be the same physical brick, yet if the goals are different then any similarity in the implementation is just a coincidence. It would not be true to say that the definition of a decorative brick includes the materials and manufacturing steps and final physical properties of a structural brick. The definition of a decorative brick is to create a certain appearance, by any means you want; it just so happens that maybe the simplest way to make a wall that looks like a brick wall is to build an actual brick wall.
If only they had tried to make it clear that there is overlap and the definitions are grey and fuzzy and open to personal philosophic interpretation and the one thing can often look and smell and taste almost the same as the other thing, if only they had said anything at all about that, it might have headed off such a pointless confusion...
Huh? I didn't mention anything about accuracy. And "accuracy" (an overloaded and ill-defined term on its own) doesn't have anything to do with the differences between simulators and emulators.
In theory, an emulator is oriented around producing a result (this may mean making acceptable compromises), whereas a simulator is oriented around inspection of state (this usually means being exact).
I assume GP meant that a lot of compilers also interpret and interpreters also compile.
For compilers, constant folding is a pretty obvious optimization. Instead of compiling constant expressions, like 1+2, to code that evaluates those expressions, the compiler can already evaluate it itself and just produce the final result, in this case 3.
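A minimal sketch of the idea (a hypothetical toy AST, not any real compiler's internals):

    #include <stdio.h>

    /* Toy expression tree: a node is either a constant or an addition. */
    typedef struct Expr {
        enum { CONST, ADD } kind;
        int value;                   /* valid when kind == CONST */
        struct Expr *lhs, *rhs;      /* valid when kind == ADD   */
    } Expr;

    /* Constant folding: if both operands of an ADD are constants,
       replace the whole node with one constant holding the result. */
    static Expr fold(Expr e) {
        if (e.kind == ADD && e.lhs->kind == CONST && e.rhs->kind == CONST) {
            Expr folded = { CONST, e.lhs->value + e.rhs->value, NULL, NULL };
            return folded;           /* 1+2 becomes the constant 3 */
        }
        return e;
    }

    int main(void) {
        Expr one = { CONST, 1, NULL, NULL };
        Expr two = { CONST, 2, NULL, NULL };
        Expr sum = { ADD, 0, &one, &two };
        printf("%d\n", fold(sum).value);   /* prints 3; the addition was done at "compile time" */
        return 0;
    }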
Then, some language features require compilers to perform some interpretation, either explicitly like C++'s constexpr, or implicitly, like type checking.
Likewise, interpreters can do some compilation. You already mentioned bytecode. Producing the bytecode is a form of compilation. Incidentally, you can skip the bytecode and interpret a program by, for example, walking its abstract syntax tree.
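And to make the "walking its abstract syntax tree" part concrete, here's a matching sketch (same hypothetical toy AST as above), where the interpreter simply recurses over the tree with no bytecode step at all:

    #include <stdio.h>

    /* Same toy expression tree as in the folding sketch. */
    typedef struct Expr {
        enum { CONST, ADD } kind;
        int value;
        struct Expr *lhs, *rhs;
    } Expr;

    /* Tree-walking interpreter: evaluate a node by recursing into its children. */
    static int eval(const Expr *e) {
        switch (e->kind) {
        case CONST: return e->value;
        case ADD:   return eval(e->lhs) + eval(e->rhs);
        }
        return 0;   /* unreachable for well-formed trees */
    }

    int main(void) {
        Expr one = { CONST, 1, NULL, NULL };
        Expr two = { CONST, 2, NULL, NULL };
        Expr sum = { ADD, 0, &one, &two };
        printf("%d\n", eval(&sum));   /* prints 3, computed at run time */
        return 0;
    }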
Also, compilers don't necessarily create binaries that are immediately runnable. Java's compiler, for example, produces JVM bytecode, which requires a JVM to run. And TypeScript's compiler outputs JavaScript.
Programming languages mostly occupy a 4-dimensional space at runtime, and each axis is actually a bit more complicated than a simple line:
* The first axis is static vs dynamic types. Java is mostly statically-typed (though casting remains common and generics have some awkward spots); Python is entirely dynamically-typed at runtime (external static type-checkers do not affect this).
* The second axis is AOT vs JIT. Java has two phases - a trivial AOT bytecode compilation, then an incredibly advanced non-cached runtime native JIT (as opposed to the shitty tracing JIT that dynamically-typed languages have to settle for); Python traditionally has an automatically-cached barely-AOT bytecode compiler but nothing else (it has been making steps toward runtime JIT stuff, but poor decisions elsewhere limit the effectiveness).
* The third axis is indirect vs inlined objects. Java and Python both force all objects to be indirect, though they differ in terms of primitives. Java has been trying to add support for value types for decades, but the implementation is badly designed; this is one place where C# is a clear winner. Java can sometimes inline stack-local objects though.
* The fourth axis is deterministic memory management vs garbage collection. Java and Python both have GC, though in practice Python is semi-deterministic, and the language has a somewhat easier way to make it more deterministic (`with`, though it is subject to unfixable race conditions)
The easy definition is that an interpreter takes something and runs/executes it.
A compiler takes the same thing but produces an intermediate form (bytecode, machine code, or another language, in which case it's sometimes called a "transpiler"), which you can then pass through an interpreter of sorts.
There is no difference between Java and the JVM, Python and the Python virtual machine, or even a C compiler targeting x86 and an x86 CPU. One might be called bytecode and the other machine code... they do the same thing.
While an interpreter can do optimizations, it does not produce "bytecode" -- by that point it is a compiler!
As for the comparison with the JVM... compare it to a compiler that produces x86 code: the output cannot be run without an x86 machine. You need a machine to run something, be it virtual or not.
I would generalize it to: a compiler produces some sort of artifact that is intended to be used directly later, while for an interpreter the whole mechanism (source to execution) is intended to be used directly.
The same tool can often be used to do both. Trivial example: a web browser. Save your web page as a PDF? Compiler. Otherwise, interpreter. But what if the code it is executing is not artisanal handcrafted JS but the result of a TypeScript compiler?
Adding some anecdata, I feel like emulator is mainly used in the context of gaming, in which case they actually care a great deal about accurate reproduction (see: assembly bugs in N64 emulators that had to be reproduced in order to build TAS). I haven't seen it used much for old architectures; instead I'd call those virtual machines.
I think it is more about design: emulation mimics what something does, while a simulation replicates how it does it.
It is a tiny distinction, but generally I'd say that a simulator tries to accurately replicate what happens at the electrical level, as well as one can.
An emulator, by contrast, just does things as a black box... input produces the expected output, by whatever means.
You could compare it to how an accurate simulator of a 74181 tries to do it using AND/OR/NOT/... logic, while an emulator does it using "normal code".
In HDL you have a similar split between structural and behavioral design... structural is generally based on much lower-level logic (e.g., AND/NOR/... gates), and behavioral on higher-level operations (addition, subtraction, ...).
"100%" accuracy can be achieved with both methods.
I wonder who else still has to deal with ed... Recently I had to connect to an ancient system where vi was not available, so I had to write my own editor. Whoever needs an editor for an ancient system, ping me (it is not too fancy).
Amazing work by the creators of this software and by the researchers; you have my full respect, guys. Those are the real engineers!
I remember using an ed-like editor on a Honeywell timeshare system in the 1960s, over a Teletype ASR-33. I don't remember much except that you invoked it using "make <filename>" to create a new file. And if you typed "make love", the editor would print "not war" before entering the editor.
The “MAKE LOVE”/“NOT WAR” easter egg was in TECO for DEC PDP-6/10 machines. But DEC TECO was also ported to Multics, so maybe that was the Honeywell machine you used it on.
But, for a whole bunch of reasons, I’m left with the suspicion you may be misremembering something from the early 1970s as happening in the 1960s. While it isn’t totally impossible you had this experience in 1968 or 1969, a 1970s date would be much more historically probable.
The easter egg carried over to the PDP-11 as well. I remember it being present in RSTS/E 7.0's TECO back in my high school days, and I just fired up SIMH and found it's definitely there.
On the other hand, I never really tried to do anything with TECO other than run VTEDIT.
I also remember using MS-DOS 3.3 EDLIN in anger, on our home computer [0] when I was roughly 8, because it was the only general purpose text editor we had. (We also had Wordstar, which I believe could save files in plain text mode, but I don’t think my dad or I knew that at the time.) I didn’t do much with it but used it to write some simple batch files. My dad had created a directory called C:\BAT and we used it a bit like a menu system; we put batch files in it to start other programs. I don’t remember any PC-compatible machines at my school; it was pretty much all Apple IIs, although the next year I moved to a new school which, as well as Apple IIs, also had IBM PC JXs (IBM Japan variant of the IBM PCjr which was sold to schools in Australia/New Zealand) and Acorn Archimedes.
[0] it was an IBM PC clone, an ISA bus 386SX, made by TPG - TPG are now one of Australia’s leading ISPs, but in the late 1980s were a PC clone manufacturer. It had a 40Mb hard disk, two 5.25 inch floppy drives (one 1.2Mb, the other 360Kb), and a vacant slot for a 3.5 inch floppy, we didn’t actually install the floppy in it until later. I still have it, but some of the innards were replaced, I think the motherboard currently in it is a 486 or Pentium
In the mid 90s we had an AT&T 3B2 that only had ed on it. We used it via DEC VT-102 terminals. It (ed) works but it’s not fun by any modern standards. Must’ve been amazing on a screen compared to printout from a teletype though!
Side note: that ~1 MIP 3B2 could support about 20 simultaneous users…
An early consulting gig was to write a tutorial for ed (on the Coherent system). I often use ed--in fact I used it yesterday. I needed to edit something without clearing the screen.
Earlier, I wrote an editor for card images stored on disks. Very primitive.
I used ed in Termux on my cellphone to write http://canonical.org/~kragen/sw/dev3/justhash.c in August. Someone, I forget who, had mentioned they were using ed on their cellphone because the Android onscreen keyboard was pretty terrible for vi, which is true. So I tried it. I decided that, on the cellphone, ed was a little bit worse than vi, but they are bad in different ways. It really is much easier to issue commands to ed than to vi on the keyboard (I'm using HeliBoard) but a few times I got confused about the state of the buffer in a way that I wouldn't with vi. Possibly that problem would improve with practice, but I went back to using vi.
The keystrokes are pretty much what you'd press in vim to perform the same actions, except that append mode apparently ends when they finish the line, rather than requiring them to press Esc.
The feedback from the editor, however, is… challenging.
That's possible but unlikely. MTP as defined by Suzanne Sluizer and Jon Postel in RFC 772 in September 01980 https://datatracker.ietf.org/doc/html/rfc772 seems to have been where SMTP got that convention for ending the message:
> ...and considers all succeeding lines to be the message text. It is terminated by a line containing only a period, upon which a 250 completion reply is returned.
But in 01980 Unix had only been released outside of Bell Labs for five years and was only starting to support ARPANET connections (using NCP), so I wouldn't expect it to be very influential on ARPANET protocol design yet. I believe both Sluizer and Postel were using TOPS-20; the next year the two of them wrote RFC 786 about an interface used under TOPS-20 at ISI (Postel's institution, not sure if Sluizer was also there) between MTP and NIMAIL.
For some context, RFC 765, the June 01980 version of FTP, extensively discusses the TOPS-20 file structure, mentions NLS in passing, and mentions no other operating systems in that section at all. In another section, it discusses how different hardware typically handles ASCII:
> For example, NVT-ASCII has different data storage representations in different systems. PDP-10's generally store NVT-ASCII as five 7-bit ASCII characters, left-justified in a 36-bit word. 360's store NVT-ASCII as 8-bit EBCDIC codes. Multics stores NVT-ASCII as four 9-bit characters in a 36-bit word. It may be desirable to convert characters into the standard NVT-ASCII representation when transmitting text between dissimilar systems.
Note the complete absence of either of the hardware platforms Unix could run on in this list!
(Technically Multics is software, not hardware, but it only ever ran on a single hardware platform, which was built for it.)
RFC 771, Cerf and Postel's "mail transition plan", admits, "In the following, the discussion will be hoplessly [sic] TOPS20[sic]-oriented. We appologize [sic] to users of other systems, but we feel it is better to discuss examples we know than to attempt to be abstract."
RFC 773, Cerf's comments on the mail service transition plan, likewise mentions TOPS-20 but not Unix. RFC 775, from December 01980, is about Unix, and in particular, adding hierarchical directory support to FTP:
> BBN has installed and maintains the software of several DEC PDP-11s running the Unix operating system. Since Unix has a tree-like directory structure, in which directories are as easy to manipulate as ordinary files, we have found it convenient to expand the FTP servers on these machines to include commands which deal with the creation of directories. Since there are other hosts on the ARPA net which have tree-like directories, including Tops-20 and Multics, we have tried to make these commands as general as possible.
RFC 776 (January 01981) has the email addresses of everyone who was a contact person for an Internet Assigned Number, such as JHaverty@BBN-Unix, Hornig@MIT-Multics, and Mathis@SRI-KL (a KL-10 which I think was running TOPS-20). I think four of the hosts mentioned are Unix machines.
So, there was certainly contact between the Unix world and the internet world at that point, but the internet world was almost entirely non-Unix, and so tended to follow other cultural conventions. That's why, to this day, commands in SMTP and header lines in HTTP/1.1 are terminated by CRLF and not LF; why FTP and SMTP commands are all four letters long and case-insensitive; and why reply codes are three-digit hierarchical identifiers.
So I suspect the convention of terminating input with "." on a line of its own got into ed(1) and SMTP from a common ancestor.
I think Sluizer is still alive. (I suspect I met her around 01993, though I don't remember any details.) Maybe we could ask her.
Oh wow, really? I didn't look because I assumed mail over FTP was transferred over a separate data connection, just like other files. Thank you!
And yes, in August 01972 probably nobody at MIT had ever used ed(1) at Bell Labs. Not impossible, but unlikely; in June, Ritchie had written, "[T]he number of UNIX installations has grown to 10, with more expected." But nothing about it had been published outside Bell Labs.
The rationale is interesting:
> The 'MLFL' command for network mail, though a useful and essential addition to the FTP command repertoire, does not allow TIP users to send mail conveniently without using third hosts. It would be more convenient for TIP users to send mail over the TELNET connection instead of the data connection as provided by the 'MLFL' command.
So that's why they added the MAIL command to FTP, later moved to MTP and then in SMTP split into MAIL, RCPT, and DATA, which still retains the terminating "CRLF.CRLF".
> A Terminal Interface Processor (TIP, for short) was a customized IMP variant added to the ARPANET not too long after it was initially deployed. In addition to all the usual IMP functionality (including connection of host computers to the ARPANET), they also provided groups of serial lines to which could be attached terminals, which allowed users at the terminals access to the hosts attached to the ARPANET.
> They were built on Honeywell 316 minicomputers, a later and un-ruggedized variant of the Honeywell 516 minicomputers used in the original IMPs. They used the TELNET protocol, running on top of NCP.
I had to use ed to configure X on my Alpha/VMS machine back when I had it; there was something wrong with the terminfo setup, so visual editors didn't work, only line-based programs.
Interestingly it's actually a sort of degenerate use of ed. All it does is append one line to an empty buffer and write it to "hello.c". It's literally the equivalent of echoing the line into the file with a shell redirect.
It's not, because the shell redirection operators didn't exist yet at this point in time. Maybe (or maybe not?) it would work to cat to the file from stdin and send a Ctrl-D down the line to close the descriptor. But even that might not have been present yet. Unix didn't really "look like Unix" until v7, which introduced the Bourne shell and most of the shell environment we know today.
I love browsing the tuhs mailing list from time to time. Awesome to see names like Ken Thompson and Rob Pike, and a bunch of others with perhaps less recognizable names but who were involved in the early UNIX and computing scene.
One of the many things I dislike about the SaaS era is that this will never happen. Nobody in 2075 will boot up an old version of Notion or Figma for research or nostalgia.
Like the culture produced and consumed on social media and many other manifestations of Internet culture it is perfectly ephemeral and disposable. No history, no future.
SaaS is not just closed but often effectively tied to a literal single installation. It could be archived and booted up elsewhere but this would be a much larger undertaking, especially years later without the original team, than booting 1972 Unix on a modern PC in an emulator. That had manuals and was designed to be installed and run in more than one deployment. SaaS is a plate of slop that can only be deployed by its authors, not necessarily by design but because there are no evolutionary pressures pushing it to be anything else. It's also often tangled up with other SaaS that it uses internally. You'd have to archive and restore the entire state of the cloud, as if it's one global computer running proprietary software being edited in place.
Can anyone provide a reference on what those file permissions mean? I can make a guess but when I searched around, could not find anything about unix v2 permissions. ls output looks so familiar, except for the sdrwrw!
Pretty interesting. I guess it was way later, when they came up with the SUID semantics and appropriated the first character for symlinks (l) or setuid binaries (s)...
That reminded me of the compiler that used to include a large poem in every binary, just for shits and giggles. You've heard of a magic number, it had a magic sonnet.
I thought it was early versions of the Rust compiler, but I can't seem to find any references to it. Maybe it was Go?
When gasoline was leaded, cigarette smoke was normal everywhere, and asbestos was used for everything you can think of? It is a fascinating decade but also quality of life likely has skyrocketed since.
Depends on what you value. Purchasing power of wages has declined, for example. That's probably not better.
I suspect the sentiment is more that it would be nice to live in a simpler time, with fewer options, because it would reduce anxiety we all feel about not being able to "keep up" with everything that is going on. Or maybe I'm just projecting.
I mean... Sure? Go buy an actual VT* unit ( maybe https://www.ebay.com/itm/176698465415?_skw=vt+terminal&itmme... ?), get the necessary adaptors to plug into a computer, and run simh on it running your choice of *nix. I recommend https://jstn.tumblr.com/post/8692501831 as a reference. Once you have it working, shove the host machine behind a desk or otherwise out of sight, and you can live like it's 1980.
The only problem with real VTs is you have to be careful not to get one where the CRT has severe burn-in, like in the ebay listing. Sure, some VTs (like the VT240 or VT525) are a separate main box + CRT, but then you're missing the "VT aesthetics". The VT525 is probably the easiest one to get which also uses (old) standard interfaces like VGA for the monitor and PS/2 for the keyboard, so you don't need an original keyboard / CRT. At least for me, severe burn-in, insane prices, and general decay of some of the devices offered on ebay are the reason why I don't have a real VT (yet).
The alternative is to use a decent VT emulator attached to roughly any monitor. By "decent" I certainly don't mean projects like cool-retro-term, but rather something like this, which I started to develop some time ago and which I'm using as my main terminal emulator now: https://github.com/unknown-technologies/vt240
There is firmware available online for some terminals; you could potentially get a lot more accuracy in emulating the actual firmware, but I'm sure a lot of that code gets into the guts of timing CRT cycles and other "real-world" difficulties. I'm not suggesting this would be easy to build out, just pointing out that it's available. While I haven't searched for the VT240 firmware, the firmware for the 8031AH CPU inside the VT420 (and a few other DEC terminals) is available on bitsavers. The VT240 has a T-11 processor, which is actually a PDP-11-on-a-chip.
Actually I have the VT240 firmware ROM dumps, that's where I got the original font from. The problem is, at least the VT240 is a rather sophisticated thing, with a T-11 CPU, some additional MCU, and a graphics accelerator chip. There is an extensive service manual available, with schematics and everything, but properly emulating the whole firmware + all relevant peripherals is non-trivial and a significant amount of work. The result is then a rather slow virtual terminal.
There is a basic and totally incomplete version of a VT240 in MAME though, which is good enough to test certain behavior, but it completely lacks the graphics part, so you can't use it to check graphics behavior like DRCS and so on.
EDIT: I also know for sure that there is a firmware emulation of the VT102 available somewhere.
Ha, I just bought a VT420 a couple of weeks ago. I just finished (in the last hour, actually) a hacked-together converter that gets USB keyboards working well enough. Next job is to connect it up as a login terminal for my FreeBSD machine.
Recovering RF tapes, even a simple text file demonstrates buffer space that is not being used by the dos, or .iso file. Even in a 2.11 BSD distro, a default tiling and window manager has to be installed on the native OS. So yes, going with KDE or the X11 wm.