On Linux, there is the logrotate command, which will very nicely take a text file and, based on parameters, “rotate” the file. This is, obviously, very nice for breaking up log files into workable chunks and retaining some backlog of these logs for a set amount of time. Unfortunately, this sort of thing doesn’t exist natively for Windows.
Fortunately, someone made a PowerShell module that just… does this. Like, literally, just does this but in Windows. However, the documentation is a little confusing: the module works like a PowerShell cmdlet, but its docs lean on the documentation for the Linux command above. So, let me throw down here what I’ve done to fill in some gaps for those of us having trouble jumping that gap.
First things first, you’ll need to install the module. Once done, you’ll be able to run log-rotate -config <Config File Location> -state <State File Location>. The config file location and state file location will point to a config and state file, which I’ve personally named logrotate.conf and logrotate.status. This is the only thing we need in PowerShell.
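For reference, a quick sketch of the install and the call (the Gallery module name, parameter spellings, and paths here are my assumptions, so double-check against the module’s own docs):

Install-Module -Name Log-Rotate
Log-Rotate -Config C:\logrotate\logrotate.conf -State C:\logrotate\logrotate.status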
Now the magic sauce: the config file. I’m going to put an example below, then we can talk some more about what’s going on in here.
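Something like this; the paths, sizes, and counts are illustrative:

# “defaults” applied to every entry below
size 1M
rotate 5
extension .txt
olddir Log Archive

C:\Logs\Job1Log.txt {
}

C:\Logs\Nightly\*.txt {
    rotate 10
}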
The top section holds settings that will be applied to all of the log files you are trying to rotate. Below that, you’ll see sections defining the individual files, folders, and wildcard-defined files and folders that log-rotate will interact with when run. Log-rotate doesn’t run in the directory your shell is in; it runs against these configured directories. Within those brackets, further settings can be set that will override the “defaults” set above. You can check the logrotate page for explanations and definitions of what these settings do, but I’ll call out a few things that needed some further explaining beyond the documentation.
olddir doesn’t need quotation marks for folders with spaces. It will use a relative path, so in the first case, C:\Logs\Job1Log.txt will be rotated to C:\Logs\Log Archive\Job1log.1.txt.
extension needs the period before the extension. Note here that I’ve defined this in the “defaults” section as .txt. Remember that every time log-rotate runs, it will check for files that match, and then rotate them to include .txt at the end. This is useful to make it easier to open rotated files with your default reader, but remember that log.txt will be rotated to log.txt and log.1.txt, which means the next time log-rotate runs, both log.txt and log.1.txt will get rotated, resulting in log.txt, log.1.txt, and log.1.1.txt which is not desired. I’ve specified an olddir folder to move older files into to help avoid this issue.
Finally, the state file can be basically left alone unless you need to reset things. It is just used to keep track of the state of your log files so that the next time log-rotate is run, it’ll know how old the files are, for example.
One of the worst things about Task Scheduler is not knowing when tasks fail. Well, as it turns out, you can have Task Scheduler take actions, like running a PowerShell script to email you, when triggered by certain events in Event Viewer. By default, you can get it to trigger on very generic things like Success and Failure based on Event ID. But what if you want to trigger based on Result Codes? That’s a bit tougher, because that’s all in the XML view, and not part of what Event Viewer actually knows how to work with. Thankfully, you can build an XML query manually.
So! In my example use case, I am running a batch file. When it succeeds without issues, it’ll give event id 201 and result code 0, but when it fails, it could fail with a bunch of different codes, including 2147942401 and 2147942402. Well, I don’t really care how it failed, just that it did, so I want to be alerted when event id 201 has anything in the result code that isn’t 0.
Go ahead and make a New Task. Begin the Task “On an event”. Click the “Custom” radio button. Edit the Event Filter and go to the XML tab. Make sure the “Edit query manually” checkbox is checked. Then, include the following:
<QueryList>
<Query Id="0" Path="Microsoft-Windows-TaskScheduler/Operational">
<Select Path="Microsoft-Windows-TaskScheduler/Operational">*[System[(Level=1 or Level=2 or Level=3 or Level=4 or Level=0 or Level=5) and (EventID=201)]]</Select>
<Suppress Path="Microsoft-Windows-TaskScheduler/Operational">*[System[(Level=1 or Level=2 or Level=3 or Level=4 or Level=0 or Level=5) and (EventID=201)]] and *[EventData[Data[@Name='ResultCode'] and (Data="0")]]</Suppress>
</Query>
</QueryList>
The Path attribute just points to the log’s path in Event Viewer. You can cheat this by using the Filter tab to fill out everything it allows you to before switching to the XML tab and editing the query manually. You won’t be able to switch back and forth, though, once you’ve started editing.
<Select Path="Microsoft-Windows-TaskScheduler/Operational">*[System[(Level=1 or Level=2 or Level=3 or Level=4 or Level=0 or Level=5) and (EventID=201)]]</Select>
This says to get all events that have Event ID 201 and match any of those different alert levels (I just chose all of them). “But wait!” I hear you saying, “I want to be more selective about the result code, don’t I?” Yup, which is why:
<Suppress Path="Microsoft-Windows-TaskScheduler/Operational">*[System[(Level=1 or Level=2 or Level=3 or Level=4 or Level=0 or Level=5) and (EventID=201)]] and *[EventData[Data[@Name='ResultCode'] and (Data="0")]]</Suppress>
This means “Don’t tell me about anything with Result Code 0.” Anything else means something happened and I need to go fix something.
So, “Tell me about every Event ID 201 event, unless it has Result Code 0.” Simple.
This is all taken from Moe Kinani here: https://cloudbymoe.com/f/windows-laps—power-app, and I will be referencing the instructions there throughout. However, the instructions there are a little bare for my smooth brain and there were a few hurdles I needed to get over, either because I didn’t understand the documentation properly, or because there have been changes to the platform since this was posted. I’ll also editorialize here a bit as is my wont. Make sure to follow along over there, though.
Azure VM
We’ll need to set up an Azure VM. I won’t go into much detail here because a lot of the choices that go into making one are your own. This VM is only going to be used to run some Microsoft Graph PowerShell commands, so it doesn’t need to be beefy. I also know that in my environment it doesn’t need 100% uptime. I can still get to the LAPS passwords in Azure AD or Intune, so I chose basically the cheapest option of VM, which also lets Microsoft shut down my VM at any given time if they need the resources. That’s a tradeoff I’m good with. I also have the VM set to shut down every night at 6 PM using the built-in tools for the VM, because I know I won’t have any need for it after that point. I also have it automatically turning back on at 5 AM, which uses a different set of tools that I will detail later.
To set up VM auto-start and auto-stop, there are instructions here to set it up: Azure VM Start/Stop V2. The long and the short of it, though, is to go to their GitHub page linked in the above instructions and install it in your Azure instance. Then, in Azure, look for “Logic Apps.” In there, you’ll see some new stuff that looks like this.
Here are my settings.
Registered Apps
This part was very straightforward and you can follow Moe’s instructions on how to set up the two registered apps you’ll need. One will be used to get information on user accounts, and the other will be used to get the LAPS password for the machine.
Azure Automation Account
Again, Moe’s instructions are largely fine. However, there’s an issue with his scripts that you download from his GitHub.
In the test script, on line #19, you need to convert the string to a secure string before Connect-MgGraph will use it. Replace line #19 with the following:
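# A reconstruction of the gist, assuming the token string lands in $Token as in his script;
# Connect-MgGraph now wants a SecureString rather than plain text:
$Token = ConvertTo-SecureString -String $Token -AsPlainText -Force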
This also means that in his second script, you’ll need to replace line #25 with the above as well.
If you run either script, you’ll notice that when Connect-MgGraph -AccessToken $Token is run, you get a ton of stuff in your output.
All of this will mess with your PowerAutomate flow later. When Moe wrote his instructions, all that we received back was a simple “Welcome to Microsoft Graph,” and so he has a compose step that gets rid of this as part of the flow. Since then, Microsoft has added more junk in here. Thankfully, we can edit line #28 and add -NoWelcome to the end to get rid of all of this and the need for that compose transform.
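With that change, the connect line winds up looking something like this (same $Token variable as in the earlier fix):

Connect-MgGraph -AccessToken $Token -NoWelcome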
The rest should work.
PowerAutomate
Go ahead and import WindowsLAPSStep1 from his GitHub. This can be done from the My Flows tab by choosing the Import dropdown and picking “Import Package (Legacy).” A note on this step: WindowsLAPSStep1 uses the legacy “PowerApps” app in its first step. You’ll see why this is an issue in the next step. At present it’s still working, but I imagine that at some point this will be deprecated, so keep this in mind if it’s not working.
This next step is where things start to really diverge from Moe’s instructions, and it’s entirely because the PowerApps app is in the process of being deprecated. You can’t pick it, only “PowerApps (V2),” which works differently but ultimately still gets the job done.
Start with PowerApps (V2) and add a Text input. In the first field, put in ThisItem.azureADDeviceId. Leave the rest of it default.
Next, add a Compose. Click the little lightning symbol, then choose ThisItem.azureADDeviceId. This brings that value into the flow in a way that we can use.
Next, add an Azure Automation Job action.
Here, you’ll specify details about the runbook you want the job to run. This is all the stuff we set up back in the “Azure Automation Account” step. The last field will use the output from the previous Compose step.
Next, we’ll create an Azure Automation Get Job Output action.
We’ll set that up to get the output of the job we created in the previous step.
Next, we’ll add another Compose action to parse the result back into a string.
Finally, we’ll add a “Respond to a PowerApp or flow” action. We’ll add a text input named “LocalPass” with a value that is the output from the previous Compose step.
PowerApp
Once again, Moe’s got a nice little package for WindowsLAPSStep1 that can be downloaded from his GitHub and imported into PowerApp. Something that confused me was that, once you upload the Zip file, you’ll need to do some things on the resulting page before you can click Import. You’ll need to click the spots marked with arrows and perform the required actions.
You can follow the rest of Moe’s instructions from here, but I made some significant tweaks to his imported app that I’ll detail below. But, at this point, if you finish out Moe’s instructions, you should have a working app that will pull Windows LAPS passwords for computers in AzureAD/Intune. I have it published and shared with specific users, and have it published to Microsoft Teams. Users are then able to go to the Apps section of Teams and install the app both on their desktop client and on their phone apps.
Licensing
You will need some kind of licensing for this app, since it uses Premium sources (Azure AD). At present, you can either get a license for each individual user that will be using the app, or you can get Per App licenses, which let any user the app is shared with use it as long as a license is available. In other words, you can either get named licenses (a license per user) or concurrent licenses (the Per App licenses). In my case, we so rarely pull passwords that it’s unlikely we’ll ever need to look up passwords simultaneously, so a Per App license is enough to let the entire helpdesk staff use the app.
PowerApps Customization
Spinning Load Icon
Probably as a result of using such a low power VM, my search takes a really long time when you click “Get LAPS.” On average it’s about a minute of waiting. Not a major problem for my use case, but sometimes you start to wonder if it’s really working. While there is a very very subtle little thing going on at the top of the screen, it’s way too subtle. So instead, I went to Loading.io and grabbed a loading spinner I liked that was free. I saved it as an SVG and uploaded it to the Media tab of the app. I then placed it on the screen, along with two text boxes, one that says the app is running, and another that says “No really, it’s running” after 10 seconds.
And while I’m at it, I also added a spinning wheel when clicking “Find the Device.” That search takes a pretty inconsequential amount of time, but seeing the same elements while waiting provides continuity and builds trust in the app.
Now, before I move on, I want to note that I changed the names of a bunch of the elements on the screen from the defaults that Moe used. This was to help me to understand what was doing what. I’m not going to change my stuff back, so just know that you might need to go searching for the elements I’m referencing. I hope I gave them fairly obvious names.
Add the spinner and the two text boxes onto your screen and position them where you’d like them. The key here will be the “Visible” property of elements in the advanced tab. Set the spinner’s visible property to locShowSpinner. Set the visible property of the two text boxes to ShowTimer1 and ShowTimer2 respectively, based on which one shows first. These elements should now disappear.
Now add a Timer. Set it somewhere on the screen, but set the visible property to false. This can be set to true just for troubleshooting. In the timer’s advanced tab, set the following:
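Duration: 10000
Start: TimerGo
OnTimerEnd: UpdateContext({ShowTimer2: true})

(Those are my values: the 10,000 ms Duration is the 10-second delay mentioned above, and TimerGo and ShowTimer2 are the context variables used in the button code below.)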
Now, on the “Find the Device” button element, in the advanced tab, put the following in the onSelect field:
// show the spinner
UpdateContext({locShowSpinner: true});
// Get Computer info
ClearCollect(MK6,WindowsLAPSStep1.Run(TextInput_FQDN.Text));
// hide the spinner
UpdateContext({locShowSpinner: false});
Then, on the “Get LAPS” button, put the following in the onSelect field:
// show the spinner
UpdateContext({locShowSpinner: true});
UpdateContext({ShowTimer1: true});
// start timer
UpdateContext({TimerGo: true});
// load data before going to next screen
Set(LABS_VAR,WindowsLAPSStep2.Run(ThisItem.azureADDeviceId).localpass);
// reset timer
UpdateContext({TimerGo:false});
Reset(Timer);
// hide the spinner
UpdateContext({locShowSpinner: false});
UpdateContext({ShowTimer1: false});
UpdateContext({ShowTimer2: false});
Finally, create a Text box that fills the entire screen and is placed between all of these new elements and the rest of what Moe has. Name it something like “ClickShield” and set its visible field to locShowSpinner. This will prevent people from interacting with things while the app is searching.
Reset Button
Once you’ve searched for something, everything is left in all of the fields as it was. This can make running another search confusing. So, I created a reset button at the bottom of the page.
Create a button, and in the onSelect field, put the following:
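// a sketch, reusing names from elsewhere in the app (MK6, LABS_VAR, TextInput_FQDN);
// flip varReset so any control with its Reset property set to varReset clears itself
Set(varReset, true);
Set(varReset, false);
// clear out the previous search results
Clear(MK6);
Set(LABS_VAR, Blank());
Reset(TextInput_FQDN);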
Then edit the Reset property of the ComboBox dropdown (the one with all your users’ names) and set it to varReset. Now when you click that button, everything should clear and reset back to the defaults.
Grey Out “Find The Device” when FQDN Field is Empty
During normal workflow, you should be finding a user in your drop down, and then when you select them that should fill in the field next to it with their FQDN, which is what the app uses to find assigned machines. But what if somehow that field gets blanked out? When you click “Find The Device” you’ll get an error about the upstream server not responding, since you gave it a null value. To make sure this doesn’t happen and to streamline the user experience a bit, we can “disable” the “Find The Device” button until there’s a value in there. Find the “DisplayMode” property of the “Find The Device” button and add the following:
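// a sketch: grey the button out whenever the FQDN text box is empty
If(IsBlank(TextInput_FQDN.Text), DisplayMode.Disabled, DisplayMode.Edit)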
Remember how in the last post, I talked about the Source Interface Filter on FortiGate DNAT policies? And remember how I talked about how DNAT policies overrule static route policies? Well, if you ever find yourself with a guest network that needs to be able to talk to the DMZ, make sure to add the guest network to your Source Interface Filter. Then, when a system on the guest network tries to get to the IP of something in your DMZ that has an associated DNAT policy, this will route the traffic correctly. I guess this is basically a hairpin NAT?
If you have Destination NAT (DNAT) set up in your Fortigate, you may have noticed this button:
It took me a long while to figure out what this button means. We have two internet uplinks that, at the time of writing, are not aggregated in a redundant link or an SD-WAN link of any kind. We have two static routes set up, and if one ISP is down, we disable one of the routes and the one with lower priority becomes active. This is a less-than-ideal setup that I’ll be fixing in a future post, but for now it makes for an interesting situation that highlights something about how DNAT works on the FortiGate.
I noticed recently that the systems with a local IP set up with DNAT were not able to communicate out to the internet. Traffic initiated from the outside could come in and complete all of its necessary communication, but the systems couldn’t browse out to the internet. In our case, these were Citrix controllers, so it took me a long time to notice because nobody was using those systems to browse the internet.
When I finally dug into what was going on, I realized just how important that little button up top is. Let’s say we have two static routes set up:
Order    Destination IP    Gateway
1        0.0.0.0/0         50.100.123.123
2        0.0.0.0/0         200.100.123.123
We also have two DNATs set up:
Order    Details                            Interface
1        200.100.123.124 -> 192.168.1.10    Port 2
2        50.100.123.124 -> 192.168.1.10     Port 1
In this case, when the system at 192.168.1.10 attempts to reach out to the internet, I would have expected the FortiGate to send that traffic along the static route set, in this case 50.100.123.123. Instead, however, the FortiGate checks the DNAT rules first, before the static routes, and matches the local IP with what’s on this table. It then sends it out of the assigned interface for that DNAT rule, using the public IP address set in the DNAT rule.
Obviously, this can create a lot of confusion because then, on the return trip, the FortiGate is getting a packet that doesn’t make sense and it doesn’t know how to route it back to the system, causing the system to be unable to communicate over TCP. So how do we fix this? Using that button. It’s fairly obvious in hindsight, but that button means that the DNAT rule only applies if the packet is coming through the specified interface. By setting that filter to Port 2 in the first DNAT rule, I can be sure that traffic originating from 192.168.1.10 on my DMZ port (we’ll call it Port 3) will still follow the static route, and the DNAT rule will only apply when the packet is coming from Port 2.
I hope this helps you figure out why, despite having static routes, firewall policies, and central NAT rules in place, nothing is working and your system can’t talk out to the internet. Maybe this was obvious, and maybe this is how it works on every other router, but this had me well and truly stumped for days.
We recently switched from Cisco products to Fortinet products for our network stack. We decided, perhaps unfortunately, to hook the FortiSwitch up to the FortiGate via the FortiLink (I’m not even kidding with this terminology, their branding is legit), but this made it difficult to configure the ports on the FortiSwitch with any granularity.
If you’re reading this and are thinking about doing this, I’d recommend against it. You wind up having to go through the FortiLink to perform any configuration on the FortiSwitch, and some configuration elements are not exposed in the FortiGate GUI, requiring you to go through the CLI to configure them. You might think that you can still get to the web interface for the FortiLink via direct IP, so it’ll be okay, but once you create the link with the FortiGate, changes made through the FortiSwitch’s FortiGUI (sorry, this one’s a joke) will not take effect over configurations made via the FortiLink.
If you really want to do this, though, make sure you have a really straightforward setup, that you desperately want everything in one single pane of glass, and that you’re comfortable doing work in the CLI.
All of that being said, the below is how you’ll get to the FortiSwitch’s port configurations via the CLI from the FortiGate (over the FortiLink).
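As a sketch, the path in looks like this from the FortiGate CLI; the serial number and port name are placeholders for your own:

config switch-controller managed-switch
    edit "S124DF1234567890"
        config ports
            edit "port5"
                set description "whatever you need to change"
            next
        end
    next
end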
If you’re setting up a certificate-based connection to Microsoft Graph PowerShell (or whatever they’ve decided to name it at the point you’re reading this. You know what I’m talking about) and you’re getting an error when running:
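# A stand-in for the kind of command I mean; the IDs and thumbprint are placeholders:
Connect-MgGraph -ClientId "<app-client-id>" -TenantId "<tenant-id>" -CertificateThumbprint "<certificate-thumbprint>"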
don’t worry, that just means you need to run PowerShell as admin. You’re accessing the local machine’s cert store, so you need admin rights to do this.
In a previous post I talked about customizing the Azure AD sync rules to do some gymnastics with AD attributes getting imported into Azure AD. Recently I ran into a vendor who required that the email address’s capitalization match the capitalization in their SSO entries in order for the SSO to work. So if, on my side of things, I formatted people’s email addresses like First.Last@domain.com, but in the application I set someone’s email address to be first.last@domain.com, these two entries would not match and the SSO would not work.
I know. I’m flabbergasted as well.
To solve this, we resolved to always use lowercase for email addresses in both our AD and the application. But we’re human, people make mistakes, and more importantly people leave jobs with institutional knowledge like this, so we may as well try to make the computers do some of this work for us. As it turns out, the AD Sync synchronization rules editor has a function to convert strings to all uppercase or lowercase. We’ll use the previous post as a jumping off point.
Modified:
IIF(IsPresent([extensionAttribute1]),LCase([extensionAttribute1]), IIF(IsPresent([userPrincipalName]),[userPrincipalName], IIF(IsPresent([sAMAccountName]),([sAMAccountName]&"@"&%Domain.FQDN%),Error("AccountName is not present"))))
Wrapping [extensionAttribute1] with LCase() will force what’s in the user’s AD extensionAttribute1 attribute to be sent to Azure AD all lowercase. This makes sure that, at least from the IT side of things, we won’t have any problems if we accidentally set up First.Last@domain.com.
I recently had a cronjob that I wanted to run on the first full Tuesday of the month. Well, crontab doesn’t handle this, obviously, but it got me thinking how to figure this one out. As it so happens, the Tuesday of the first full week of the month will always fall somewhere between the 2nd and the 8th.
I’m defining the first full week of the month as the first week where, starting Monday, every day that week is of the same month.
So, the earliest full week is one where the 1st falls on a Monday, so the 2nd falls on a Tuesday.
Following that logic, the latest full week would be one where the month starts on a Tuesday, meaning the first full week starts with the 7th on a Monday. That means the first Tuesday of a full week would be on the 8th.
So if we make a cronjob that runs every Tuesday and checks if the date is >= 2 and <= 8, we should always find the first Tuesday of the month that is part of a full week.
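In crontab form, that looks something like this (the time and script path are placeholders, and the % signs have to be escaped because cron treats a bare % specially). Note that we can’t just write 0 3 2-8 * 2: when both the day-of-month and day-of-week fields are restricted, cron fires when either one matches, which is not what we want.

# Tuesdays at 3:00 AM, but only when the date is between the 2nd and the 8th
0 3 * * 2 [ "$(date +\%d)" -ge 2 ] && [ "$(date +\%d)" -le 8 ] && /path/to/job.sh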
As is any SysAdmin’s wont in life, I’ve been messing around with OAuth2-Proxy and trying to add additional functionality beyond It-Finally-Works. If you haven’t already seen my previous post about setting up OAuth2-Proxy, please check it out since I’ll be working from that foundation.
Sign Out
While it might not have been the most important thing for a wiki page, it would be nice for my users to have the option to sign out if, say, they’re on a public computer (and responsible enough users to actually think of that, but that’s another story altogether).
This should be as simple as putting a “sign out” link for the users to click, but what URL do we use there? Well, there are two things we have to consider: the locally cached cookies, and the actual IDP session. If we clear the first, but not the second, we’ll be taken back to a login screen, but as soon as the IDP auth begins, the IDP will say “No need, you’re already logged in” and send you on your way without a username and password prompt. If you end the second, but not the first, then you won’t even get the sign in screen, because the cookies will still be cached. Even if you’re logged out from the IDP’s perspective, OAuth2-Proxy still sees the cookies and will let you in without needing to check with your IDP.
OAuth2-Proxy’s documentation tells us we can use the following to clear cookies:
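https://wiki.domain.com/oauth2/sign_out?rd=<URL to redirect to afterwards>

That handles the cookies. For the IDP session, Azure AD exposes a logout endpoint that ends the session and then sends the user wherever you point it (I’m showing the v2.0 endpoint on the common tenant here; adjust for your tenant):

https://login.microsoftonline.com/common/oauth2/v2.0/logout?post_logout_redirect_uri=<URL to redirect to afterwards>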
So we’ll need to combine these two into a URL. That monstrosity (including all of the HTML URL encoding necessary) should look something like this (assuming you’re using Azure AD as your IDP):
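https://wiki.domain.com/oauth2/sign_out?rd=https%3A%2F%2Flogin.microsoftonline.com%2Fcommon%2Foauth2%2Fv2.0%2Flogout%3Fpost_logout_redirect_uri%3Dhttps%253A%252F%252Fwiki.domain.com

(A reconstruction: the rd value is the URL-encoded Azure AD logout URL, which itself carries a double-encoded post_logout_redirect_uri pointing back at wiki.domain.com.)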
This URL will first tell OAuth2-Proxy to remove its cookies, then redirect (rd) to login.microsoftonline.com to log out of Azure AD, then tell Azure AD to re-route back to wiki.domain.com. From the user’s perspective, they’ll click the sign out link, choose the account they want to log out of, get kicked through a couple of informational screens, then land back at the sign in page.
There’s two more steps we need before we’re done. First, go into the Azure Portal, and go back to your registered app (Azure AD > App Registrations, and click your registered app). In the left-hand panel, go to “Authentication” and in the main panel, scroll down to “Front-channel logout URL.” Here, put in https://wiki.domain.com/oauth2/sign_out. I’m not entirely sure if this is correct, since in my testing I couldn’t quite get single sign-out to work right, but it couldn’t hurt.
Finally, and this is important, go into your config file and add whitelist_domains = "login.microsoftonline.com" (or whatever domain your IDP uses). Without this, OAuth2-Proxy won’t redirect to your IDP.