
OpenVMS on x86 becoming available for the hobbyist program

VSI is rolling out the x86 PAKs to those who have already joined the Community License Program (CLP), so start by joining that. They are processing members in batches each week, as there are about 1,000 members. I'm not sure how far ahead of notification they provision your account on the SR, but people have tried before being notified and found they already had access. If you don't already have a working account, enter the email address VSI knows you by (from when you applied to the CLP) and request a password reset. If you have been provisioned, you will get an email with the reset link and can proceed from there.
 
Right now I have one instance of OpenVMS E9.2-1 running on an Intel NUC 9 Extreme with an i9 processor, 64GB of memory, and 5TB of storage. At some point I will build a VMScluster with one or two more x86 instances just to try it out. Right now I'm working on getting DECnet going to connect to my two DS10s running VSI OpenVMS V8.4-2L1.

It seems to run pretty snappily so far, but I haven't done any heavy-duty testing yet.

 

Are you virtualizing it, or running it on bare metal?
 
Ah, sorry. The NUC is running VMware ESXi V7.0u3. OpenVMS x86 is a 2-CPU, 12GB VM with a 24GB main disk and a 16GB secondary disk. Although bare-metal support is supposed to be coming in the future, OpenVMS x86 on bare metal isn't supported yet; only VMware (ESXi, Workstation, or Fusion), Oracle VirtualBox, and KVM are supported right now. People have gotten it running under different deployments of KVM, including Proxmox.
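For the KVM route, a guest matching the sizing above could be defined with virt-install, something like the following. This is only a sketch: the domain name, ISO filename, and image paths are placeholders, and you'd want to check the VSI installation guide for the firmware and device settings it actually requires.

```shell
# Hypothetical virt-install invocation for an OpenVMS x86 guest under KVM/libvirt.
# "openvms" and the ISO path are illustrative names, not anything VSI ships.
virt-install \
  --name openvms \
  --memory 12288 \
  --vcpus 2 \
  --cdrom /var/lib/libvirt/images/openvms-e9.2-1.iso \
  --disk size=24 \
  --disk size=16 \
  --osinfo detect=on,require=off \
  --graphics none \
  --console pty,target_type=serial
```

OpenVMS x86 does its installation over a serial console rather than a graphical one, hence `--graphics none` and the pty serial console.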
 
A little off topic, but can someone speak to the advantages of VAXcluster? I was reading a paper from the '80s describing it, and it sounded mostly like a high-performance interconnect; operationally, from an application perspective, it wasn't really much more than a nice shared filesystem, plus some distributed inter-process communication facilities (i.e., mailboxes).
 
The defining thing about a VAXcluster was that a single physical disk can be directly attached to, and available read/write on, multiple systems at once, unlike Windows NT clusters (and their descendants), where a shared disk is only ever accessible on a single system at any moment in time. So if you are building for hardware resilience on Windows NT, then on hardware failover there is a delay while the disk is made available on the backup server and services are restarted, which does not happen with VMS.

Note that this is not the same as sharing a disk over the network, where the disk is attached to a single server which manages access. In both VMS and NT clusters there are multiple host controllers which all connect to disks stored in a separate enclosure. So typically that means Fibre Channel-attached disks and a SAN, but you can also use SCSI or DSSI. DSSI is DEC's proprietary SCSI-type interface.
 

No problem. I successfully set up E9.2-1 under QEMU on my Ubuntu server. It took me a while to figure out how to use virsh to connect to the console so that I could actually perform the install, though.
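For anyone stuck at the same point, the console attach step looks roughly like this ("openvms" is a placeholder for whatever you named the libvirt domain):

```shell
# Attaching to a libvirt guest's serial console with virsh.
virsh list --all        # confirm the domain name and whether it is running
virsh start openvms     # boot the VM if it is shut off
virsh console openvms   # attach to the serial console; detach with Ctrl+]
```

This only works if the domain was defined with a serial console device, which is also what the OpenVMS x86 installer talks to.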

It seems to work well here. I haven't installed the C/C++ compilers yet, though.
 