Game Complete! The Legend of Heroes: Trails in the Sky

So this is something I’ve wanted to do for years but never got around to: recording when I finish a game. I am terrible about finishing games, especially JRPGs, so I feel like I need to keep a record of the rare times it actually happens!

I just finished the JRPG “Legend of Heroes: Trails in the Sky” (Steam/PC). I’m not going to do a review, but it is an excellent game. But I knew that going into it. Because this is the second time I’ve completed it! I actually played it when it was initially released in the West on the Playstation Portable (PSP) 10-15yrs ago. I’m pretty sure I still own the UMD disc for my still-working PSP.

So why replay this game? Because there’s a second and third chapter to it. I initially expected to play at least the second chapter on the PSP back in the day, but unfortunately it never released for the PSP. Instead, the second chapter went to the PS3, and I just never got around to playing it.

Then it was re-released on Steam in 2014, and the remaining chapters were finally released on Steam in 2015 and 2017. As such, the second and third chapters have been on my radar for a while. I recently picked up the additional chapters, but it’s been so long since I played the first chapter that I’d forgotten most of the story, so it made sense to simply replay it. And I’m glad I did.

Some details of this playthrough:

  • Installed: 2022-09-23
  • Start Date: 2022-09-23, est.
  • Time in-game based on Steam: 80.7 hrs
  • Time in-game based on Save Data: 60.5 hrs
  • Completed: 2022-10-26

So 60-80hrs over about a month. Not bad. Especially when most of my JRPGs can take me years to finish, if I even do finish them. I often restart them multiple times, because I’ll sometimes put a JRPG down for a few years and forget everything (Looking at you Final Fantasy XII…).

On to the second chapter! My goal is to finish that one and the third by the end of the year.

Afterwards, maybe I’ll move on to some of the other LoH games that I’ve been working on over the years. The LoH series has quite a lot of games, much like the Final Fantasy series. “Trails” is just one subseries of LoH. I played all three games of the so-called “Gagharv” subseries on the PSP back in the day and completed two of them. I also completed the first chapter of the “Trails of Cold Steel” subseries on my Vita, and have been playing the second chapter on and off for the last few years. See what I mean?


One last thing…

Other games I’ve completed in 2022 so far:

  • Final Fantasy VII Remake (PS4) – Completed on 2nd restart.
  • Desperados III (Steam/PC) – Finished 2022-05-17; started it back in 2020 at the beginning of the pandemic.
  • The Great Ace Attorney Chronicles (Steam/PC), both first and second parts – Finished 2022-09-05; started 2021-07-30.

Homelab Chronicles 08: Investigating the Incident

(Continued from Homelab Chronicles 07).

From the moment I woke up, through work, and into the afternoon, I was constantly monitoring my network. I have the Unifi app on my phone, so it was easy to see the list of clients connected to WiFi. Luckily, nothing unexpected connected.

At this point, I assumed my neighborly adversary (adversarial neighbor?) knew they had been caught. The WiFi network they had connected to had disappeared overnight, as did my similarly named “main” one. In its place, a new SSID would pop up on “Emily’s iPad” when they tried to connect, with a name that wasn’t mean, crude, or insulting, but one with a subtle message that basically says, “I see you and I know what you did.”

I forgot to mention that my main WLAN has always used WPA2/WPA3 for authentication. I think there are ways to crack WPA2, but I’ll get into that in the future.

Once I got home, I jumped back onto the Unifi Controller to see what information I could glean. Having “fancy” Ubiquiti Unifi gear means the platform records and stores a lot more information than the average household router. I mentioned in the previous Chronicle that the Controller can tell me the device manufacturer by looking up MAC addresses. I can also see connection histories. And with the platform’s packet inspection and traffic analysis features, I can also see general traffic usage, i.e. where they were going.
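As a tangent, the manufacturer lookup works because the first three bytes of a MAC address are an OUI (Organizationally Unique Identifier) that a vendor registers with the IEEE. A minimal PowerShell sketch of the idea; the prefixes in the table are placeholders, not real registry entries:

# Hypothetical OUI-to-vendor table; real assignments live in the IEEE OUI registry
$ouiTable = @{
    "AA:BB:CC" = "ExampleVendor A"
    "DD:EE:FF" = "ExampleVendor B"
}

$mac = "AA:BB:CC:12:34:56"
# An OUI is the first three octets of the MAC address
$prefix = (($mac.ToUpper() -split ":")[0..2]) -join ":"

if ($ouiTable.ContainsKey($prefix)) {
    "$mac -> $($ouiTable[$prefix])"
} else {
    "$mac -> unknown vendor"
}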

So when did they first get on my network?

Unifi gives alerts when devices connect and disconnect. I silenced these alerts, because that’d be annoying with as many devices as I have, but it records these events nonetheless. It also shows the last 5 times a device connected, along with a duration. Most of the unauthorized devices appeared to have connected within the last 10-14 days. However, I did see one device with a recorded connection date around 20 days ago. It was connected for 13 days straight. It had “Amazon” in the hostname, so I’m assuming it’s some kind of smart home device that’s always on.

Sadly, because my server, and therefore the Controller, was turned off to save on AC and electricity costs, there are large gaps in the 2-3 month history. It’s possible the devices were connected further back than 20 days ago. But that “Amazon” device only had two entries: 20 days ago and then overnight when I powercycled the WAP. So I’m assuming that nearly 3 weeks ago was when they first cracked the password.

Where did they go or what did they do?

Naturally, I next wanted to know what they were doing while connected. I needed to know if the adversaries were doing illegal things. Were they engaged in piracy? I don’t need a(nother) copyright strike on my ISP records! I hope to god they weren’t doing anything more illegal than that.

The Insights tab for the blocked devices showed me generally what they were doing. And it was mostly mundane, everyday stuff. Lots of streaming content from YouTube, Netflix, Hulu, Spotify, etc. Looking at that “Amazon” device, I could see traffic entries for Hulu and Amazon Video. Maybe it’s not a smart home device, but instead a Fire Stick or Fire Tablet. Interestingly, I deduced they have a child: I found a traffic entry on a blocked device for Roblox, the popular kids game. I’m more of a Minecraft guy, myself.

Traffic sources for a single device in Unifi Console.
Roblox? Oof.

Looking at Internet traffic overall, I could see there were other devices that were connected prior to my discovery. The only ones I outright blocked were those that happened to be connected to WiFi at the time. There was traffic to Xbox gaming services, which was tied to a device with an appropriate hostname: XboxOne. It looked like they downloaded a game or update/patch since it was a sizable 1.75GB download.

List of top Internet traffic sources in Unifi Console.
Probably some Call of Duty “scrub noob.”

But overall, traffic was pretty low. Certainly not enough for me to notice Internet speed degradation. Helps that I have gigabit fiber.

It doesn’t look like they were engaging in torrenting of pirated material, but at the same time, I’m not familiar with how that would look in Unifi. No “Torrenting” traffic category popped up, and I don’t know if one even exists. But given the overall low data usage, it doesn’t seem that way.

Is this a crime?

I do want to point out that what “Emily” did is highly illegal. They hacked/cracked their way into my network. Every state in the US has laws on the books about this, as does the federal government, I’m sure. But not only did they engage in unauthorized access to a network, they also used my Internet connection, which I pay for. That’s theft of services. I didn’t authorize “Emily” and their family to be on my network, nor did I allow them to use my Internet connection.

“Unauthorized access” entails approaching, trespassing within, communicating with, storing data in, retrieving data from, or otherwise intercepting and changing computer resources without consent. These laws relate to these and other actions that interfere with computers, systems, programs or networks.

National Conference of State Legislatures – Computer Crime Statutes

But then who would I report this to? And who would investigate this? My city PD and the state have more important things to worry about. I think.

Speaking of worry: because of how they entered the network, by cracking their way in, I’m still worried about my computers. While one of my cybersecurity buddies introduced me to SIEMs a few months ago and had me install Wazuh on my server, I only have it monitoring one computer, the one I use the most. I have other computers that are on almost all the time but that I don’t use as much. More importantly, I’m not anywhere near proficient enough to analyze all the logs that Wazuh is collecting. As a result, I still need to figure out what I want or need to do about my computers. Were they able to gain access to them? Upload and run malicious code?

On the other hand, someone online that I talked to about this did mention that it was odd that they didn’t attempt to obfuscate their device names. So maybe they’re just a “script kiddie” that for some reason can’t or doesn’t want to pay for their own Internet like an adult.

Regardless, it still has me worried. And that’s the rub. Even though this was entirely digital, it feels the same as if someone physically broke in and entered my home without me knowing. And then stayed there, hidden away, in the attic.

As an introvert and a sort of hermit (Look, it’s hot and humid as hell out there!), my home is my sanctuary. I’m sure that’s true for everyone, introvert or not. But because I spend so much time on my computers for work and for leisure, that too is my “home,” as sad and ridiculous as that may sound. Same concept though; in this world we live in, our “digital life” through our devices and everything stored on them is important to each of us. Find me a person that would be OK with someone “rifling” through their cell phone. Or OK with someone posting even a silly status on their Facebook or Twitter behind their back. Some people don’t even want their family or significant others to look through their phone, much less a stranger. My privacy has been invaded and my feeling of safety, shattered. It sounds dramatic, but it’s true.

I still have work to do to clean this up. And I have some ideas. But I’ll get to that in my next post, Soon™, which will wrap this ‘incident’ up.

—To be continued.

Homelab Chronicles 07: ALERT! – Unauthorized Access

It’s been a while since I’ve done anything with my Homelab. I’ve been busy with work, travel, and lounging around. There’ve even been extended periods over the last 2-3 months where my server has been completely turned off so I can save some money on electricity during the hot, hot summer. Plus, when it’s 100°F (37.8°C) outside and my AC is trying to keep things at a “cool” 78°F (25.6°C), the last thing I need is a server putting out even more heat.

But I was forced to take a look at things the other night when my Ubiquiti USG was making strange sounds. Fearing that it was going out, I wanted to look around to see how much a replacement would cost. I needed some information on my USG, so around midnight before going to bed, I booted up the server (it hosts an Ubuntu VM that itself hosts the Unifi Controller) and signed in to the Unifi Controller via the web.

Almost immediately I was struck by how many clients were supposedly connected to the network: 34 devices.

Now, I’m a single guy with no kids, living in a 2-bedroom apartment. But I’m also an IT professional, a geek, and a gamer. I have several computers, cell phones, tablets, consoles, and such. I also have some smart home stuff like plugs, thermostat, cameras, etc. But the number of devices connected is pretty stable. Like 20-25.

So to see 34 clients was surprising.

I started with the list of wired connections. About 10 devices, most of which I recognized, even with just MAC addresses. Unifi has a neat feature where it’ll look up MAC addresses to find manufacturer information. Anyway, all good there. So I went to the list of wireless connections.

At the very top of the list, I saw 10 devices that I didn’t recognize. One had a hostname of “Emilys-iPad.” I’m not an Emily. I don’t know an Emily. And I certainly don’t have an iPad named Emily…’s-iPad.

List of Devices in Unifi Console
Who the hell is Emily and why does she have an iPad on my network?

My heart started racing and I got jitters. Devices were on my WiFi network that were not mine. Devices that I didn’t authorize, by someone that I didn’t know. There were a couple Amazon devices, an LG device, and other hostnames I didn’t recognize. But I don’t have any Amazon devices, nor LG.

How long have these been on my network? Whose are these? But more importantly, how did they get on the network?

I didn’t spend much time answering those questions, as the situation needed to be dealt with. Instead of going to bed, I took a screenshot of the device list with hostnames and MAC addresses, and then immediately got to work.

To start, I disconnected the unknown devices and blocked them from connecting to my WAP. I noticed that all of them were connected to a secondary WLAN with a separate SSID; more on that in a second. I disabled and then deleted that WLAN. I then powercycled the USG and the Unifi WAP to make sure those devices were off the network and wouldn’t be able to connect again. When everything restarted, nothing rogue reappeared, and only my devices were connected to the “main” WLAN. The threats were removed.

OK, so now about this WLAN. Some months ago, I whipped out my old Playstation Portable (PSP). I was feeling nostalgic and wanted to find some old games on the Playstation Store, so I needed to connect my PSP to the Internet. I have a modern WiFi 6 (802.11ax) Unifi AP. Unfortunately, the PSP, being so old, can only connect to 802.11b or 802.11g networks. I can’t remember the decision-making process, but I eventually created a secondary WLAN specifically for b/g devices. And of course I password protected it. However, since the PSP is old, I used the old-school WEP (Wired Equivalent Privacy) as the security protocol.

Devices were on my WiFi network that were not mine. Devices that I didn’t authorize, by someone that I didn’t know.

Anyway, after I was finished with my PSP, I didn’t take the network down. “Never know when I might want to use it again,” I thought. So I left it up. Nothing had connected to it since. Since then, I’ve signed in to the Unifi Controller a handful of times and never noticed anything other than my devices on my main WLAN. I honestly forgot that I even had it up. Until this happened.

With the threats neutralized, I could finally start doing some investigating. And my first question was obviously how they got on the network.

I’m assuming I password protected the WLAN. Because I’m not an idiot. Usually. But if it was only with WEP…well, there’s a reason why we’ve moved to WPA (WiFi Protected Access), WPA2, and WPA3.

According to Wikipedia, WEP was created in 1999. 23yrs ago. And major vulnerabilities were found quickly. Without getting into the nitty-gritty, it’s not hard to crack a WEP password. There are easy-to-find programs online that sniff packets, analyze the data, and eventually crack the password. Possibly in minutes.

That said…it’s not exactly something I’d expect my average neighbor to be doing. I’ve known about cracking WiFi passwords and “wardriving” for a long time. But even I’ve never done it.

I got a little nervous thinking about that. What kind of adversary is one of my neighbors? Are they also an IT person? Maybe a security professional?

And if they were on my network, what else did they see or even touch? In retrospect, it was dumb of me to do this, but I didn’t put that WLAN on a separate VLAN. I mean, why would I? I’m the only one connecting to it, with my one device. What that means is if anything connects to that b/g network, they’re on THE network. They can see my computers, my server, my consoles, my smart devices…everything.

Do I now have to wipe all my computers and VMs? I mean, some need it, but it’s still an undertaking to have to redo everything. It’d likely take a whole weekend and then some.

"Ain't Nobody Got Time For That"

That led me down another path, concerning my “main” WLAN. Did I use the same password for that b/g network, too? If so, they’d know the password to my main WLAN, as well, which has a different, but similarly-styled SSID.

So I nuked my main WLAN and created an entirely new one with a new SSID and new complex password. I then had to reconnect my smart home devices.

At that point, it was already around 2:00am, and I had to go into the office in the morning. What started as me wanting to find some model information on my USG turned into DEFCON1 at home.

But with the unauthorized devices off the network, a new WLAN, and the important devices back online, I felt somewhat comfortable going to bed. The investigation would have to wait until I got home the next day.

—To be continued.

Homelab Chronicles 06 – “Hey Google…” “I’m Sorry, Something Went Wrong”

I woke up early today, on a Saturday, to my alarm clock(s) going off. I was planning to go to a St. Patrick’s Day Parade and post-parade party with a friend. After turning off my phone alarm(s), I told my Google Nest Mini to stop the alarm that was blaring.

Unfortunately, it informed me that something went wrong. Though it did turn off. Usually when my Google Nest Mini has issues, it’s because WiFi messed up. So I stumbled out of bed, still half-asleep, to the guest bedroom, where the network “rack” (a small metal bookshelf) and the Unifi AP are. My main 24-port switch had lights blinking. I looked up at the AP high on the wall and saw the steady ring of blue light, indicating everything was working. OK, so not a WiFi problem, nor a network problem. Probably.

In the hallway, I passed by my Ecobee thermostat to turn the heat up a little and noticed a “?” on the button for local weather. Ah, so I didn’t have Internet. Back in my room, I picked up my phone: 5G instead of WiFi. On my computer, the Formula 1 livestream of the Bahrain track test, which I fell asleep to, had stopped. And reloading the page simply displayed a “No connection” error. I opened a command prompt and ran ipconfig /all and ping 8.8.8.8. The ping didn’t go anywhere, but I still had a proper internal IP in the subnet. Interesting. Guess the DHCP lease was still good.
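For reference, that quick triage boils down to a few commands from PowerShell. A sketch of the general order, where 192.168.32.1 is just a stand-in for whatever your default gateway actually is:

ipconfig /all          # do I still have a lease: IP, default gateway, DNS servers?
ping 192.168.32.1      # stand-in gateway address: can I reach the router at all?
ping 8.8.8.8           # can I get past the router to the Internet by raw IP?
nslookup google.com    # if raw IPs work but names don't, the problem is DNS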

Only one last place to check: the living room where the Google Fiber Jack and my Unifi Secure Gateway router were. Maybe there was a Fiber outage. Or maybe my cat had knocked the AC adapter loose while messing around in places he shouldn’t. Sunlight was streaming in from the balcony sliding door, making it hard to see the LED on the Jack. I covered the LED on the Fiber Jack with my hands as best as I could: it was blue. Which meant this wasn’t an outage. Uh oh. Only one other thing it could be.

Next to the Fiber Jack, surrounding my TV, I have some shelving with knickknacks and little bits of artwork. Hidden behind one art piece are my USG and an 8-port switch. I removed the art to see the devices. The switch was blinking normally. But on the USG, the console light was blinking at a steady interval, while the WAN and LAN lights were out. Oh no, please don’t tell me the “magic smoke” escaped from the USG.

On closer inspection, it looked like the USG was trying to boot up repeatedly. It was even making a weird sound like a little yelp in time with the console LED going on and off. So I traced the power cable to the power strip and unplugged it, waited 15 seconds, and plugged it in again. Same thing happened. I really didn’t want to have to buy a new USG; they’re not terribly expensive, but they’re not inexpensive, either.

I tried plugging it into a different outlet on the power strip, but it kept quickly boot-looping. I then brought it to a different room and plugged it into a power outlet; no change. Great.

But then I noticed that there was a little green LED on the power brick. And it was flashing at the same frequency as the USG’s console light when plugged in. Hmm, maybe the power adapter went bad. I could deal with that, provided I had a spare lying around.

The Unifi power brick said “12V, 1 amp” for the output. So I started looking around. On my rack, I had an external HDD that was powered off. I looked at its AC adapter and saw “12V, 2 amps.” That was promising, but could I use a 2 amp power supply on a device that only wants 1 amp? I looked online, via my phone, and the Internet said, “Yes.” Perfect. The amp rating on an adapter is just the maximum it can supply; as long as the voltage and connector polarity match, the device only draws the current it needs.

I swapped the AC adapter on the USG. The little barrel connector that goes into the USG seemed to fit, if not just a smidge loose. Then I plugged it back into the wall.

It turned on and stayed on! Ha!

I brought it back to the shelf and reconnected everything. It took about 5 minutes for it to fully boot up. Afterwards, I went back to my computer and waited for an Internet connection to come back, and it did.

All in all, it was a 15-20 minute troubleshooting adventure. Not what I preferred to do straight out of bed on a Saturday morning, but it got fixed. I already ordered a new AC adapter from Amazon that should arrive in a few days.

Afterwards, I got ready and went to the parade. A bit nippy at about 25°F (about -3°C), but at least it was bright and sunny with barely any wind. I went to the party and had a couple beers. It definitely made up for the morning IT sesh.

Homelab Chronicles 05: Dead Disk Drive

Sometime around the new year, one of the drives in my server appeared to have died. I had some issues with it in the past, but usually unseating and reseating it seemed to fix whatever problems it was presenting. But not this time.

My server has seven 500GB HDDs set up in a RAID 5 configuration. That gives me about 2.7TB of storage space. These are just consumer-level WD Blue 7200rpm drives. It’s a Homelab that’s mainly experimental; I’m not into spending big money on it. Not yet, anyway.
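For anyone wondering where that ~2.7TB comes from: RAID 5 spends one drive’s worth of space on parity, so the usable capacity works out to roughly

(7 − 1) × 500 GB = 3,000 GB ≈ 2.73 TiB

which is probably why it shows up as about 2.7TB.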

While I’ve since heard that RAID 5 isn’t great, I’m OK with this since this is just a Homelab. Anyway, in RAID5, one drive can die and the array will still function. Which is exactly what happened here.

However, I began tempting fate by not immediately swapping the failed drive. I didn’t have any spares at home, but more importantly, I was being cheap. So I let it run in a degraded state for a month or two. This was very dangerous, as I don’t back up the VMs or ESXi. I only back up my main Windows Server instance via Windows Server Backup to an external HDD. Even then, I’ve committed the common cardinal sin of backups: I’ve yet to test a single WS backup. So using something like Veeam is probably worth looking into for backing up full VMs. And of course testing my Windows Server full bare-metal backups.

Luckily, the fates were on my side and no other drive failures were reported. I finally got around to replacing the drive about a month ago. Got my hands on a similar WD Blue 500GB drive, a used one at that. It was pretty straightforward. I swapped the drives, went into the RAID configuration in the system BIOS, designated the drive as part of the array, and then had it rebuild. I think it took at least 10hrs.

While it was rebuilding, everything else was down. All VMs were down, because ESXi was down. Thought it best to rebuild while nothing else was happening. Who knows how long it would’ve taken otherwise and if I’d run into other issues. I wonder how this is done on real-life production servers.

Afterwards, the RAID controller reported that everything was in tip-top shape.

But of course, I wanted more. More storage, that is. I ended up getting two of the 500GB WD Blue HDDs: one for the replacement and the other as an additional disk to the array.

Unfortunately, Dell does not make it easy to add additional drives to an existing array. I couldn’t do it directly on the RAID controller (pressing Ctrl+R during boot), nor in Dell’s GUI-based BIOS or Lifecycle Controller. iDRAC didn’t allow it either.

Looking around online, it seemed that the only way to do it would be via something called OpenManage, Dell’s systems management software. But I couldn’t get it to work no matter what I did. The instructions on what I needed to install, how to install it, and how to actually use it once installed were all poor. Thanks, Dell.

In the end, after spending at least a few hours researching and experimenting, it didn’t seem worth it for 500GB more of storage space. I did add the 8th drive, but as a hot spare. I may even take it out and use it as a cold spare.

But yeah, I can now say that I’ve dealt with a failed drive in a RAID configuration. Hopefully it never goes further than that.

Homelab Chronicles 04 – Power Outages and Conditional Forwarding on USG

I’m typing this from my new digs. “New” is relative; I’ve been here three months already, yet still living out of boxes to an extent. Though all the important stuff is up and running like my computers, the network, the TV, and my bed.

My network diagram needs to be re-done, as I’ve had to move switches and routers around to make the physical infrastructure work for this apartment. The Google Fiber jack is in the living room, but some computers and network equipment are in bedrooms. Logically speaking, however, the network is still the same.

The main difference is that I have more lengths of cable running along the carpet than I did before, so I’ve had to secure the Ethernet cables to the baseboards so that I or my cat don’t trip over them. It actually looks pretty good!

Cat 5e cable cleanly secured to baseboard

As part of getting a new place, I did some additional home automation upgrades. My electric company was offering free Smart Thermostats, so I took advantage. I also replaced and added additional TP-Link Kasa Smart Plugs to control lamps around my apartment.

However, a peculiar situation arose when the power went out briefly a couple times from a bad storm. After everything came back on and online, the smart plugs stopped working properly. Only a hard power cycle—literally unplugging and re-plugging in the smart plug—seemed to fix it.

I won’t go into the whole ordeal, but after asking around on reddit, someone suggested the solution possibly lay with DNS. Because of course it’s always DNS.

DC01, a VM on my server, is the primary DNS on the network. When the power goes out, DNS becomes unavailable. Everything loses power, of course. However, everything else comes back online faster, including my router, the AP, switches, computers, and the smart plugs. The server, on the other hand, takes several minutes to boot RAID, boot ESXi, and finally boot Windows Server and make the DNS available.

I’m assuming that when the network goes down, the computers maintain their DHCP lease information, including DNS settings. However, that didn’t seem to be the case with the smart plugs. They may keep their dynamic IPs, but DNS settings do not appear to stay. Not entirely sure what goes on.

So this was a perfect opportunity to attempt Conditional Forwarding on my Unifi Secure Gateway. Conditional forwarding, as the name suggests, allows DNS requests to go to specific DNS servers depending on the domain being queried.

Why will this fix my problem? Because I have an AD domain on the network, which requires DNS. Some computers are on the domain, while other computers aren’t, along with all the IoT devices. But all use the same internal DNS servers, with the DNS settings being handed out via DHCP from the router.

I found some resources on how to do this and it’s relatively easy. I won’t go into the how-to, but I’ll share the guides:

The short of it is that I had to create a new JSON file called config.gateway.json, with the following settings:

{
    "service": {
        "dns": {
            "forwarding": {
                "options": [
                    "server=/home.jcphoenix.com/192.168.32.252",
                    "server=/home.jcphoenix.com/192.168.32.242",
                    "server=8.8.8.8",
                    "server=9.9.9.9"
                ]
            }
        }
    }
}

The first two options lines associate AD domain references with the internal DNS servers. The last two options denote that any other requests should go to Google Public DNS or Quad9, another public DNS.

After placing it on the Controller (“UniFi – Where is <unifi_base>? – Ubiquiti Support and Help Center“), and setting the “DHCP Name Server” in the Controller to “Auto,” I restarted the USG and tested it out.

Powershell Results of NSLookup

As you can see, when the hostname GRSRVDC01 was queried with nslookup, the result came back from the internal DNS server. Same with the FQDN of the AD domain. But when JCPhoenix.com (this website) was queried, the request went outbound.
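For reference, the test amounts to a few nslookup queries from PowerShell; a sketch using the names above (your own hostnames and domain would obviously differ):

# Internal names: these should be answered by the internal DNS servers
# (192.168.32.252 / 192.168.32.242 from the config above)
nslookup GRSRVDC01
nslookup home.jcphoenix.com

# External name: the USG should forward this out to Google Public DNS or Quad9
nslookup jcphoenix.com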

So mission success!

There were a couple other ways I could have fixed this. Buying a UPS for the server was probably the easiest. Which I still need to do. I also could have manually set DNS on domain computers, while letting the USG give out public DNS settings to the rest of the devices. But neither option would have been as fun while also being free.

I also don’t like using static network settings, aside from a device IP. Since this is an experimental homelab, some computers that are on the domain today might not be tomorrow and vice versa. I want systems to automatically receive necessary settings based on new or changing conditions or attributes.

The last thing I’ll mention is regarding uploading the config.gateway.json file. I host my Unifi Controller on an Ubuntu VM. So instead of using SSH to get in and upload the file, I simply dragged and dropped the file into the correct folder. Unfortunately, finding the folder proved tougher than expected. Because the folder didn’t exist.

The trick to get the folder created was to go into the Controller UI, and upload an image of a floorplan. In the old UI, the path is:

Map > Floorplan (Topology dropdown top left) > Add New Floorplan > Choose Floorplan Image.

Any image will work, since the goal is simply to get the folder created. After that, the floorplan can be deleted, if desired.

That’s it for this round. I’m thinking that my next project will be to set up a VPN server to allow me to remote-in to the network when I’m away. Though we’ll see if I have the motivation anytime soon!

Homelab Chronicles 03 – I Need a UPS ASAP

The power went out recently in my neighborhood. Neighboring buildings were completely dark, as was mine. I was cooking dinner at the time, so not only was I hungry, but I was also in the dark.

And so was the server. Now I don’t host any crucial services on there. It’s a Homelab; it’s just for funsies. But I still need to get an uninterruptable power supply (UPS), at least to allow for graceful shutdown when these rare outages happen. Twice the power tried to come on minutes after the outage. That means power went out three times; two of those times, the server got power for just a moment before turning off again, since I have the machine set to automatically start after power failure. I don’t know what that does to a machine, but it can’t be good. Especially an old boy like mine.

That said, I don’t expect I’ll get a long-lasting UPS. The outage was long: 45 minutes. There’s no way I could keep a server going for that long on a UPS. At least not one that I could afford. Plus, it’d be worthless to do so since everything else was unpowered: my computers, the router and switches, the fiber jack, etc. So I only need something that can last 10-15min. It’d also be nice if the UPS had some way to trigger a shutdown of ESXi, but that might be asking too much.

I’ve researched this before, but I think I’ll get back on it. Maybe even a refurbished one is good enough.

On a side note, this will lead to my next task: setting up those Conditional DNS Forwarders I mentioned in my previous post. When the power did come back on, the router and Internet fiber jack came on quickly. But since DNS is on the server, and the server takes like 10 minutes total to boot the hardware, then ESXi, then Windows Server, I didn’t have working Internet during that time. First World Problem at home, sure, but in a business environment, that could be pretty annoying, especially if the issue is a server being down while everything else is up.


Yes, that was my view above during the outage. Yes, those buildings had power, while I had none. I guess I live on the edge of a neighborhood grid. The buildings to the side and “behind” me had no power, while those in “front” of me did.

Honestly, it was kind of nice to sit in the darkness for 45min. I had my phone, so it wasn’t terrible. But I was still hungry.

Homelab Chronicles 02 – Admin Giveth and Taketh Away…the Domain Controller

One of my plans at work is to properly remove an older physical server from the network. This server once functioned as the primary – and only – domain controller, DNS, fileserver, print server, VPN server, Exchange server, etc. It was replaced in 2018, but was never really offlined. It existed in limbo; sometimes on, sometimes off. During the pandemic, my “successor/predecessor” turned it back on so staff could VPN in to the office from home.

Long story short, it’s time to take it down. To start, I want to remove its DC role. But I’ve never done that before. I’ve added DCs, but never taken one out of the network. So that’s why I did this dry run in the homelab first.

I started by creating a new Win2016 VM in ESXi. This would be my third Windows Server instance, and I named it appropriately: DC03.

I set a static IP and added the domain controller role to it via Server Manager. The installation went off without a hitch, so I completed the post-installation wizard and added it as a third domain controller. Again, no issues. In a command prompt, I used the command repadmin /replsummary to verify that links to the other two DCs were up and that replication was occurring. After that, I checked that DNS settings had replicated. All DNS entries were present, including the DNS Forwarders.
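For the record, the same steps can also be scripted. A rough PowerShell sketch (not exactly what I clicked through in Server Manager) of adding the role, promoting the server as an additional DC in the existing domain, and then checking replication; the domain name matches the home.jcphoenix.com lab domain used elsewhere in these posts, and everything else is illustrative:

# Add the AD DS role and management tools (assumes a domain-joined Server 2016 box)
Install-WindowsFeature AD-Domain-Services -IncludeManagementTools

# Promote it as an additional domain controller with DNS for the existing domain;
# it will prompt for the DSRM (safe mode) password and then reboot
Import-Module ADDSDeployment
Install-ADDSDomainController -DomainName "home.jcphoenix.com" -InstallDns -Credential (Get-Credential)

# After the reboot, check that replication links to the other DCs are healthy
repadmin /replsummary
repadmin /showrepl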

Wait, what?


In a moment of serendipity, I had created an impromptu experiment a couple weeks prior. I added DNS forwarders to DC01 after DC02 was added as a DC. I had seen guides and best practices saying that DNS settings, whether coming from a router via DHCP or statically set on a workstation, shouldn’t mix internal and external servers. So DNS1 shouldn’t be an internal DNS server while DNS2 points to a public DNS like Google’s 8.8.8.8. That’s how I found out about DNS forwarders in Windows DNS Manager.

I expected the DNS forwarders to eventually replicate from DC01 to DC02, but they never did, even after multiple forced replications. At the time, I didn’t understand why that was the case. In the end, I manually added the forwarders to DC02.
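Doing it by hand in DNS Manager is fine for two servers, but it can also be done with the DnsServer PowerShell module; a sketch, using Google and Quad9 purely as example forwarder targets:

# List the forwarders currently configured on this DNS server
Get-DnsServerForwarder

# Add public resolvers as forwarders (example addresses)
Add-DnsServerForwarder -IPAddress 8.8.8.8, 9.9.9.9 -PassThru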

And then a few days after that, I added another forwarder on DC01, but not to DC02. And of course, that last entry didn’t replicate, leaving a discrepancy.

Apparently, DNS forwarders are local only and don’t replicate. Conditional forwarders can replicate (if you choose to store them in Active Directory), but not full-on external forwarders. This has something to do with the fact that DCs in the real world may be in different geographical locations, with different ISPs, which can require separate external DNS forwarders at each location.
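As an aside, that AD-stored behavior shows up in the cmdlets too: a conditional forwarder created with a replication scope gets stored in AD and pushed to the other DCs’ DNS servers. A sketch where both the zone name and the target server address are made up:

# Conditional forwarder for a hypothetical partner domain, stored in AD
# and replicated to every DNS server in the forest
Add-DnsServerConditionalForwarderZone -Name "partner.example.com" `
    -MasterServers 10.10.10.10 -ReplicationScope "Forest"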

So imagine my surprise when DC03 automatically had the DNS forwarders that I had placed on DC01. But I quickly stumbled upon the answer:

By [adding DNS roles], the server automatically pulled the forwarders’ list from the original DNS servers, and it placed these settings in the new DNS server role. This behavior is by default and cannot be changed.

Self-Replicating DNS Forwarders Problems in Windows Server 2008/2012 | Petri IT Knowledgebase

That’s why DC03 had the DNS forwarders. When a new DC is added that has a DNS role, it will do a one-time pull from the other DNS server; in this case, my “main” DC. But after that, DC03’s forwarders will forever be local.

Case closed!


With the new DC03 in place, with its proper roles, I left it for 24hrs. Just to see if anything weird would happen.

And wouldn’t you know it, nothing weird happened. Sweet!

I ran nslookup on a few different computers on my network, including domain- and non-domain joined ones.

The results looked the same on all of the computers: all three DCs/DNS servers were present.

After confirming that everything was OK, I started removing the newest DC from the environment. I attempted to remove the role via Server Manager, but was prompted to run dcpromo.exe first. Since it wasn’t the last DC, I made sure not to check the box asking if it was the last DC in the domain. Once again, everything went smoothly.
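If I ever need to do this without the GUI, the ADDSDeployment module has a demotion cmdlet as well. A sketch (not what I actually ran) that demotes the DC but leaves the AD DS binaries installed:

# Demote this server back to a regular member server;
# prompts for the new local Administrator password
Import-Module ADDSDeployment
Uninstall-ADDSDomainController `
    -LocalAdministratorPassword (Read-Host "New local admin password" -AsSecureString)

# Optional: remove the AD DS role entirely afterwards
# Uninstall-WindowsFeature AD-Domain-Services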

To confirm that DC03 was no longer an actual DC, I did another nslookup on various computers. The IP address of DC03 was no longer showing. In addition, I checked DNS Manager on DC01 (and DC02) and saw that DC03 was no longer a nameserver. Though a static host (A) record was still present, as was a PTR in the reverse lookup zone; both expected results. I left the AD role on the server, but I could completely remove it if I wanted.

Pretty simple and straightforward.

This gave me the confidence to do this at work. Consequently, I removed the DC role from the old server last week with no issues whatsoever. No one even knows it happened. Which is all a sysadmin can ask for!

Homelab Chronicles 01 – The Beginning, Sorta

So this is a new thing I want to try. It’s been over a year since I’ve posted, so why not?

Over the last 12-18mo, I’ve had the opportunity to set up a Homelab. I worked at an MSP for almost a year and a half and got a bunch of old client equipment, including a couple Dell servers.

My lab isn’t really segregated from the main network, but that’s because of what I’m trying to do; I’ll explain soon. But before I get to that, here’s the main gear I’ve been playing with:

  • Ubiquiti Unifi Security Gateway (USG)
  • Cisco SG200-26 Managed Switch (24 port)
  • Ubiquiti U6-Lite AP
  • TP-Link TL-SG108E Managed Switch (8 port)
  • Dell PowerEdge T620

I also have a bunch of other gear, like a Dell PowerEdge R610 and another 16- or 24-port switch that are sitting around collecting dust. At one point, however, I was playing with Unraid on the R610. Also had a desktop PC that had pfSense or OPNsense functioning as my router/firewall, before getting the USG. I don’t know enough about firewalls to really use those though.

Anyway, here’s a crappy diagram of the network.

Things in red are the main devices. Not all devices are shown; I think I have like 10 physical computers, though not all are used regularly. And there are a bunch of other WiFi and IoT devices. I included some of the extra devices like the PS4 and iPhone so it doesn’t look like I just have these extra network switches for no reason. I live in a 2-Bdr, 900 sq. ft. apartment, but the extra switches are so I don’t have 3+ cables running to a room that I’m tripping over (thank god for gaffing tape).

Initially, I was going to have a separate lab subnet and VLAN. And I started it that way. But I’m one of those that if I don’t have a real “goal,” it’s hard for me to just play around with things. I need an actual project to work on. It wasn’t enough to have a separate, clean sandbox. I wanted the sandbox that already had all the toys in it! So I’ve already redone the network environment once.

In the end, I decided that I’d create a Windows Active Directory Domain environment for home. I want to have a domain account that I use across my computers. Ideally, I’d have folder redirection, offline folders, and maybe even roaming profiles, so that any computer I use will have my files. The server(s) will also function as a fileserver, with network shares shared out to accounts via Group Policy.

On the network side, some of my goals are:

  • Stand up a VPN service, probably using WireGuard
  • Create a management VLAN and another for everything else
  • Set up conditional DNS forwarding
  • Replace the switches with Ubiquiti gear to really take advantage of the Unifi software

I could go on, but what I’m trying to emulate at home is a small business environment, from the bottom to the top, from the router all the way to the workstation. I work for a small biz, so this is the perfect place for me to mess around and screw things up before I try anything on my employer’s live environment.

All in all, this is a great learning experience and I’m excited to share what I’m doing. Maybe this will help others who are trying to build their own Homelabs.

I know I’ll be screwing things up along the way – and I can’t wait to do so!

AAR: 8 Mar. 2020, Deklein — Whoring on the Penultimate GOTG Keepstar

Having already been on a few GOTG Keepstar killmails last month and having already hit my PVP kill requirements for March, I wasn’t really planning on getting on another. But when you’re up at 4:00 am, finishing up some 5+ hours of mining, a free opportunity to pad zKillboard isn’t a bad idea. Especially since it requires little thought from the sleep-deprived brain.

Our fleet to EU3Y-6 in Deklein – through two advantageously-spawned Thera wormhole connections – made quick time, as we flew up in a fast Jackdaw fleet. As expected, there were hundreds of other players in system ready to whore on the kill, along with the main Titan damage-dealer fleets. Our allies in NC. were there, along with the “bluetral”-for-this-eviction TEST, among others. Enemies were absent in major numbers since technically GOTG had disbanded. Who would show up to defend a structure of a dead coalition of alliances? No one would. The enemies barely defended their structures when they were still a semi-organized group.

There’s not much else to say, other than our Jackdaw fleet did get Doomsday’d by the Keepstar, though I think we only lost eight or so ships. I didn’t take any damage from it.

The following photos tell the story better.

The main Titan fleet doing the vast majority of damage.
The Titans started getting ballsy, knowing they couldn’t be killed.
Is this a Michael Bay movie?

Killmail of the Keepstar. And I believe there’s another one, the last Keepstar to destroy, in a few hours. We’ll see if I’m awake for that one.