<!DOCTYPE html>
<html>

<head>
<title>Projects</title>
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="">
<meta name="keywords" content="">
<meta name="author" content="George Wilkinson">
<meta http-equiv="content-type" content="text/html; charset=utf-8">
<link rel="stylesheet" href="./src/styles/style.css" type="text/css">
<link rel="stylesheet" href="./src/styles/index.css" type="text/css">
<link rel="stylesheet" href="./src/styles/projects.css" type="text/css">
</head>

<body>
<!-- Main Content -->
<div id="main">

<!-- Top Bar -->
<div id="top-bar">

<!-- Nav Bar -->
<div id="toggle-navbar">
<input type="checkbox">
<div></div>
<div></div>
<div></div>
<ul>
<a href="./index.html"><li>Home</li><div></div></a>
<a href="./projects.html"><li>Projects</li><div></div></a>
<a href="./contact.html"><li>Contact</li><div></div></a>
</ul>
</div>

<!-- Content of Top Bar -->
<div id="top-content">

<!-- Title Header -->
<div id="title-header">
<header>
<h1>George Wilkinson</h1>
<h2>Projects</h2>
</header>
</div>
</div>
</div>

<!-- Content Start -->
<div id="content">

<div class="card-divider" id="prox-header">
<h2>Main Proxmox Hypervisor Node</h2>
</div>

<div class="card" id="prox-build">
<div class="card-header">
<header>
<h3>Building</h3>
</header>
</div>
<div class="card-content">
<p>
My main Proxmox node is my largest project to date. I needed bulk network storage and local hypervisor compute
that was low-power, cost-effective, and quiet. I chose to build on a consumer platform inside an enterprise chassis,
giving me the best of both worlds for efficiency, form factor and cooling. Building on the last generation of
the AM4 platform, I put together the compute side of the project for about £320. This included a
<a href="https://www.amd.com/en/products/apu/amd-ryzen-7-5700g">Ryzen 7 5700G</a>,
<a href="https://www.asrock.com/mb/AMD/B550%20Pro4/index.asp">ASRock B550 Pro4</a>,
<a href="https://www.corsair.com/uk/en/c/memory/ddr4-ram">2x32GB + 2x16GB Corsair DDR4</a> and a
<a href="https://noctua.at/en/nh-l9a-am4">Noctua NH-L9a</a>. For the chassis, I ended up using a
<a href="https://www.supermicro.com/products/chassis/2U/?chs=825">SuperMicro CSE-825</a>, which is
compatible with full-size consumer ATX motherboards and has 8x 3.5" hot-swap drive bays. The final piece of the project
was storage: I needed a large amount of redundant storage with fast reads, something that isn't usually cheap. I
managed to pick up 10x 4TB
<a href="https://documents.westerndigital.com/content/dam/doc-library/en_us/assets/public/western-digital/product/data-center-drives/ultrastar-sata-series/data-sheet-ultrastar-7k4000.pdf">HGST SAS drives</a>
from an e-waste company, each with around 50k power-on hours. Since my chassis holds 8 at a time, this leaves me
with 2 cold spares should any fail. At the same time I bought a 2-port Mini-SAS 8087
<a href="https://docs.broadcom.com/doc/12353331">LSI 9207-8i PCIe HBA</a> to connect the chassis backplane to my motherboard. On top
of this hardware I chose, of course, to run <a href="https://proxmox.com">Proxmox</a>, as it is a FOSS
operating system and supports ZFS natively, unlike alternatives such as <a href="https://www.vmware.com/uk/products/esxi-and-esx.html">ESXi</a>.
</p>
</div>
<div class="card-footer" style="flex-direction: column;">
<figure>
<a target="_blank" href="./src/images/proxmox-01.jpg"><img src="./src/images/proxmox-01.jpg" alt="Server Internals"/></a>
<figcaption>fig 1. Completed Build in the rack</figcaption>
</figure>
</div>
</div>

<div class="card" id="prox-storage">
<div class="card-header">
<header><h2>Storage</h2></header>
</div>
<div class="card-content">
<p>Set up as a ZFS striped mirror array, I get 16TB of usable space out of 32TB raw, and the pool can survive
between one and four drive failures depending on which drives fail. 50% usable space is quite the loss, but the
reduced capacity is made up for by a massive increase in random IOPS, giving me much higher performance for cloud
storage and applications. For reference, the highest speed I have seen is 1.6GB/s (12,800Mbps), around 180x the
average UK residential download speed (according to <a href="https://www.virginmedia.com/blog/broadband/average-broadband-speed">Virgin Media</a>).
</p>
</div>
<div class="card-footer">
<figure>
<a target="_blank" href="./src/images/proxmox-02.jpg"><img src="./src/images/proxmox-02.jpg" alt="Proxmox Host Array"/></a>
<a target="_blank" href="./src/images/proxmox-03.jpg"><img src="./src/images/proxmox-03.jpg" alt="Proxmox Host Storage"/></a>
<figcaption>fig 2. ZFS Striped Mirror Array. fig 3. Storage Displayed in Panel.</figcaption>
</figure>
</div>
</div>

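For a quick sanity check, the numbers above can be reproduced with a little arithmetic. A minimal sketch (the ~71 Mbps baseline is an assumption inferred from the 180x comparison, not a measured value):

```python
# Sanity-check the storage figures quoted above.

def striped_mirror_usable_tb(num_drives: int, drive_tb: int) -> int:
    # In a striped mirror (RAID10-style), each two-drive mirror
    # contributes only one drive's worth of usable capacity.
    return (num_drives // 2) * drive_tb

print(striped_mirror_usable_tb(8, 4))  # 16 (TB usable, out of 32 TB raw)

# 1.6 GB/s expressed in megabits per second.
peak_mbps = 1.6 * 8 * 1000
print(peak_mbps)  # 12800.0

# Assumed ~71 Mbps average UK residential download speed,
# inferred from the ~180x figure in the text.
print(round(peak_mbps / 71))  # 180
```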
<!-- Card 3 -->
<div class="card" id="prox-features">
<div class="card-header">
<header><h2>VM List</h2></header>
</div>
<div class="card-content">
<ul id="vm-list">
<li><section class="collapse-list">
<input type="checkbox" name="collapse-list-item" id="vm-01" />
<label for="vm-01">OpenMediaVault ( OMV )</label>
<section class="collapse-list-content">
<h5>4CPU, 8GB RAM</h5>
<p>Under OMV, I import a virtual disk to provide network-attached storage. This is used by other servers and clients
in my house, and in some cases outside of it. I use a dedicated VM because Proxmox's own network-sharing options
are less than ideal and can cause instability. OMV has a clean web GUI where I can configure network shares
and expand their size on the fly, which is useful in a dynamic environment like this.
</p>
</section>
</section></li>

<li><section class="collapse-list">
<input type="checkbox" name="collapse-list-item" id="vm-02" />
<label for="vm-02">Ubuntu Server ( Docker )</label>
<section class="collapse-list-content">
<h5>4CPU, 30GB RAM</h5>
<p>
Under this Ubuntu VM, I run a single-node Docker stack of around 50 containers, including:
</p>
<ul id="container-list">
<li>NGINX WebServer & Proxy Manager<br/>For hosting web applications through a reverse proxy at a datacentre.</li>
<li>Authentik<br/>Provides Proxy, OAuth2 and LDAP configuration for web applications.</li>
<li>Gitea<br/>Hosts a personal Git repository for projects.</li>
<li>Grafana & InfluxDB<br/>Provides real-time monitoring and logging of device metrics.</li>
<li>Immich<br/>Google Photos alternative, entirely self-hosted and open source.</li>
<li>IPv6NAT<br/>Provides an address translation service to allow for a fully IPv6 Docker stack.</li>
<li>VaultWarden<br/>Fully self-hosted, lightweight password manager.</li>
</ul>
</section>
</section></li>

<li><section class="collapse-list">
<input type="checkbox" name="collapse-list-item" id="vm-03" />
<label for="vm-03">Home Assistant OS</label>
<section class="collapse-list-content">
<h5>2CPU, 4GB RAM</h5>
<p>
With HAOS, I have set up integrations with several IoT devices on my network, such as TP-Link Tapo bulbs and light strips.
I am also working on integrating Grafana, Frigate & CCTV cameras to provide a centralised app to control & monitor
smart home devices. I used to run Home Assistant dockerised on my Docker VM, but I found the dedicated VM to be
better supported and more stable.
</p>
</section>
</section></li>

<li><section class="collapse-list">
<input type="checkbox" name="collapse-list-item" id="vm-04" />
<label for="vm-04">Proxmox Backup Server ( PBS ) ( In Progress )</label>
<section class="collapse-list-content">
<h5>1CPU, 4GB RAM</h5>
<p>
While I already back up my virtual machines to an external server, using some storage on a friend's Proxmox node,
running PBS locally lets me back up & snapshot my virtual machines to a different drive in the same machine. This isn't ideal from
a 3-2-1 perspective, but frequent local rolling backups can be incredibly useful if anything goes wrong inside
a VM itself. I am still working out a good local storage solution for the backup images, so for now it is
turned off.
</p>
</section>
</section></li>

<li><section class="collapse-list">
<input type="checkbox" name="collapse-list-item" id="vm-05" />
<label for="vm-05">Windows Server 2019</label>
<section class="collapse-list-content">
<h5>4CPU, 4GB RAM</h5>
<p>
Using an evaluation release of Windows Server, I use this VM to perform any operations or run any programs on this node
that cannot be done on Linux. This is rare, but useful when required. I have also used it in the past as a graphical
Windows environment outside my own network for configuring network settings, since my Proxmox panel sits behind a reverse proxy.
</p>
</section>
</section></li>
</ul>
</div>
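A stack like the one described under the Ubuntu VM is typically declared in a Docker Compose file. The fragment below is an illustrative sketch for two of the listed services only; the image tags, volumes and ports are assumptions, not the actual configuration:

```yaml
# Illustrative docker-compose sketch; not the real stack configuration.
services:
  gitea:
    image: gitea/gitea:latest
    restart: unless-stopped
    volumes:
      - ./gitea:/data
    ports:
      - "3000:3000"
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    volumes:
      - ./vw-data:/data
```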
<div class="card-footer">
<figure>
<a target="_blank" href="./src/images/proxmox-04.jpg"><img src="./src/images/proxmox-04.jpg" alt="Proxmox Host Panel"/></a>
<figcaption>fig 4. VM List & Host Summary in Panel.</figcaption>
</figure>
</div>
</div>
</div>
</div>
<!-- Content End -->

<footer>
<p>By George Wilkinson</p>
<p>Date Modified: Fri 24th Nov</p>
</footer>

</body>
</html>