Machine Learning Security and Event Sourcing for Databases

In times past, an application would make an update to a database, essentially overwriting the old data with new data. There wasn’t an actual time element to the update; the data would simply change. This approach to database management worked fine as long as the database was on a local system or on a network owned by an organization. However, as technology has progressed, with machine learning performing analysis and microservices making applications more scalable and reliable, some method of reconstructing the sequence of events has become more important.

To guarantee atomicity, consistency, isolation, and durability (ACID) in database transactions, relational products such as SQL Server use a transaction log to ensure data integrity. In the event of an error or outage, it’s possible to use the transaction log to replay pending database operations or roll them back as needed. It’s possible to recreate the data in the database, but the final result is still a static state. Transaction logs are a good start, but not all database management systems (DBMSs) support them. In addition, transaction logs focus solely on the data and its management.

In a machine learning security environment, of the type described in Machine Learning Security Principles, this often isn’t enough to perform analysis of sufficient depth to locate hacker activity patterns. The transaction logs would need to be combined somehow with other logs, such as those that track RESTful interaction with the associated application. The complexity of combining the various data sources would prove daunting to most professionals because of the need to perform data translations between logs. In addition, the process would prove time-consuming enough that the result of any analysis wouldn’t be available in a timely manner (in time to stop the hacker).

Event sourcing, of the type that many professionals now advocate for microservice architectures, offers a better solution that is less prone to problems when it comes to security. In this case, instead of just tracking the data changes, the logs reflect application state. By following the progression of past events, it’s possible to derive the current application state and its associated data. As mentioned in my book, hackers tend to follow patterns in application interaction and usage that fall outside the usual user patterns because the hacker is looking for a way into the application in order to carry out various tasks that likely have nothing to do with ordinary usage.
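To make the idea concrete, here’s a minimal sketch of an event journal in Python (the account and event names are my own invention for illustration). State is never overwritten; it’s derived by replaying the recorded events in order, and replaying any prefix of the journal reconstructs the state as it existed at that point in time:

```python
from dataclasses import dataclass

@dataclass
class Account:
    balance: int = 0

journal = []  # append-only event journal, separate from any DBMS transaction log

def record(event_type, amount):
    """Append an event; existing entries are never modified or overwritten."""
    journal.append({"type": event_type, "amount": amount})

def replay(events):
    """Derive application state by replaying events in order."""
    state = Account()
    for event in events:
        if event["type"] == "deposited":
            state.balance += event["amount"]
        elif event["type"] == "withdrew":
            state.balance -= event["amount"]
    return state

record("deposited", 100)
record("withdrew", 30)
print(replay(journal).balance)       # 70: the current state
print(replay(journal[:1]).balance)   # 100: the state after just the first event
```

Because every interaction leaves an event behind, the journal captures the usage patterns an analysis process needs, not just the final data values.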

A critical difference between event sourcing and other transaction logging solutions is that event sourcing relies on its own journal, rather than the DBMS transaction log, making it possible to provide additional security for this data and reducing the potential for hackers to alter it to cover up nefarious acts. There are most definitely tradeoffs between techniques such as Change Data Capture (CDC) and event sourcing that need to be considered, but from a security perspective, event sourcing is superior. As with anything, there are pros and cons to using event sourcing, the most important of which is that event sourcing is harder to implement and adds complexity. Many developers cite the need to maintain two transaction logs as a major reason to avoid event sourcing. These issues mean that it’s important to test the solution fully before delivering it as a production system.

If you’re looking to create a machine learning-based security monitoring solution for your application that doesn’t require combining data from multiple sources to obtain a good security picture, then event sourcing is a good solution to your problem. It allows you to obtain a complete picture of the entire domain’s activities, which helps you locate and understand hacker activity. Most importantly, because the data resides in a single dedicated log that’s easy to secure, the actual analysis process is less complex and you can produce a result in a timely manner. The tradeoff is that you’ll spend more time putting such a system together. Let me know your thoughts about event sourcing as part of a security solution at [email protected].

An Update on the RunAs Command

This is an update of a post that originally appeared on May 14, 2014.

Recently I wrote the Simulating Users with the RunAs Command post that describes how to use the RunAs command to perform tasks that the user’s account can’t normally perform. (The basics of using the RunAs command appear in Windows Command-Line Administration Instant Reference.) A number of you have written to tell me that there is a problem with using the RunAs command with built-in commands—those that appear as part of CMD.EXE. For example, when you try the following command:

RunAs /User:Administrator "md \Temp"

you are asked for the Administrator password as normal. After you supply the password, you get two error messages:

RUNAS ERROR: Unable to run - md \Temp
2: The system cannot find the file specified.

In fact, you find that built-in commands as a whole won’t work as anticipated. One way to overcome this problem is to place the commands in a batch file and then run the batch file as an administrator. This solution works fine when you plan to execute the command regularly. However, it’s not optimal when you plan to execute the command just once or twice. In this case, you must execute a copy of the command processor and use it to execute the command as shown here:

RunAs /User:Administrator "cmd /c \"md \Temp\""

This command looks pretty convoluted, but it’s straightforward if you take it apart a little at a time. At the heart of everything is the md \Temp part of the command. In order to make this a separate command, you must enclose it in double quotes. Remember to escape the double quotes that appear within the command string by using a backslash (as in \").

To execute the command processor, you simply type cmd. However, you want the command processor to start, execute the command, and then terminate, so you also add the /c command line switch. The command processor string is also enclosed within double quotes to make it appear as a single command to RunAs.
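If the layers of quoting are hard to follow, it can help to assemble the string programmatically. This Python sketch (purely illustrative, not part of the RunAs tooling) builds the command from the inside out:

```python
# Innermost layer: the built-in command to run.
inner = r'md \Temp'

# Middle layer: a throwaway command processor that runs the command and
# exits (/c). The inner double quotes must be escaped with backslashes so
# RunAs treats the whole thing as a single argument.
wrapped = 'cmd /c \\"' + inner + '\\"'

# Outer layer: the RunAs invocation, with the wrapped command quoted.
full = 'RunAs /User:Administrator "' + wrapped + '"'

print(full)  # RunAs /User:Administrator "cmd /c \"md \Temp\""
```

Reading the layers from the inside out this way makes it clear which quotes belong to RunAs and which belong to the command processor.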

Make sure you use forward slashes and backslashes as needed. Using the wrong slash will make the command fail.

The RunAs command can now proceed as you normally use it. In this case, the command only includes the username. You can also include the password, when necessary. Let me know if you find this workaround helpful at [email protected].

Simulating Users with the RunAs Command

This is an update of a post that originally appeared on April 26, 2011.

One of the challenges of writing applications, administering a network, or understanding system issues is ensuring that you see things from the user’s perspective. It doesn’t matter what your forte might be (programmer, administrator, DBA, manager, or the like), getting the user view of things is essential or your efforts are doomed to failure. Of course, this means seeing what the user sees. Anyone can run an application at the administrator level with good success, but the user level is another story because the user might not have access to resources or rights to perform tasks correctly.

Most knowledgeable users know that you can simulate an administrator by right clicking the application and choosing Run As Administrator from the context menu. In fact, if you Shift+Right Click the application, you’ll see an entry for Run As A Different User on the context menu that allows you to start the application as any user on the system. However, the GUI has limitations, including an inability to use this approach for batch testing of an application. In addition, this approach uses the RunAs command defaults, such as loading the user’s profile, which could cause the application to react differently than it does on the user’s system because it can’t find the resources it needs on your system.

A more practical approach is to use the RunAs command directly to get the job done. You can see some basic coverage of this command on page 480 of Windows Command-Line Administration Instant Reference. To gain a basic appreciation of how the user views things, simply type RunAs /User:UserName Command and press Enter (where UserName is the user’s fully qualified logon name including domain and Command is the command you wish to test). For example, if you want to see how Notepad works for user John, you’d type RunAs /User:John Notepad and press Enter. At this point, the RunAs command will ask for the user’s password. You’ll need to ask the user to enter it for you, but at that point, you can work with the application precisely as the user works with it.

Note that I highly recommend that you create test user accounts with the rights that real users have, rather than use a real user’s account for testing. Otherwise, if something goes wrong (and it usually does), you’ve damaged a real user’s account. Make sure you follow all of the usual policies to create this test user account and that you have as many test user accounts as needed to meet your organization’s needs.

Of course, many commands require that you provide command line arguments. In order to use command line arguments, you must enclose the entire command in double quotes. For example, if you want to open a file named Output.TXT located in the C:\MyDocs folder using Notepad and see it in precisely the same way that the user sees it, you’d type RunAs /User:John "Notepad C:\MyDocs\Output.TXT" and press Enter.

In some cases, you need to test the application using the user’s credentials, but find that the user’s profile gets in the way. The user’s system probably isn’t set up the same as your system, so you need your profile so that the system can find things on your machine rather than on the user’s machine. In this case, you add the /NoProfile command line switch to use your profile. It’s a good idea to try the command with the user’s profile first, just to get things as close as you can to what the user sees. The default is to load the user’s profile, so you don’t have to do anything special to obtain this effect.

An entire group of users might experience a problem with an application. In this case, you don’t necessarily want to test with a particular user’s account, but with a specific trust level. You can see the trust levels set up on your system by typing RunAs /ShowTrustLevels and pressing Enter. To run an application using a trust level, use the /TrustLevel command line switch. For example, to open Output.TXT as a basic user, you’d type RunAs /TrustLevel:0x20000 "Notepad C:\MyDocs\Output.TXT" and press Enter. The basic trust levels are:

  • 0x40000 – System
  • 0x30000 – Administrator
  • 0x20000 – Basic User
  • 0x10000 – Untrusted User
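The trust levels above fit naturally in a small lookup table. This Python sketch (my own illustration, not part of the RunAs tooling) builds the matching command line from a level name:

```python
# Basic trust levels, as reported by RunAs /ShowTrustLevels.
TRUST_LEVELS = {
    "System": 0x40000,
    "Administrator": 0x30000,
    "Basic User": 0x20000,
    "Untrusted User": 0x10000,
}

def trustlevel_command(level_name, command):
    """Build a RunAs /TrustLevel invocation for the named trust level."""
    level = TRUST_LEVELS[level_name]
    return f'RunAs /TrustLevel:{level:#07x} "{command}"'

print(trustlevel_command("Basic User", r"Notepad C:\MyDocs\Output.TXT"))
# RunAs /TrustLevel:0x20000 "Notepad C:\MyDocs\Output.TXT"
```

A helper like this is handy when you script the same test at several trust levels in a row.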

Many people are experiencing problems using the /ShowTrustLevels and /TrustLevel command line switches with newer versions of Windows. The consensus seems to be that Microsoft has changed things with the introduction of UAC and that you’ll need to work with the new Elevation Power Toys to get the job done. You may also want to review the article PowerToys running with administrator permissions because it provides some insights that may be helpful in this case as well. I’d be interested in hearing about people’s experiences. Contact me at [email protected].

Understanding the Maturing of the Command Line

A number of people have asked me why I’ve written several different command line reference books. The answer is that each book serves a different market. Serving reader needs is a quest of mine. As reader needs change, I also change my books to better meet those needs. The command line may seem static, but reader needs have changed over the years because of the way in which the command line is perceived and the new commands added to it.

The most popular of the set, Windows Command-Line Administration Instant Reference, provides the reader with quick access to the most commonly used commands. In addition, this book emphasizes examples over documentation, so you see how to use a command, but don’t necessarily get every detail about it (only those that are used most often). This book is mainly designed to assist administrators. With this group in mind, the book also provides a good overview of batch files and scripting. The point is to provide something small that an administrator can easily carry around.

A second command line reference, Administering Windows Server 2008 Server Core, is designed to meet the needs of those who use Microsoft’s Spartan Server Core operating system. The book includes a number of special features for this audience, such as instructions on getting hard to install software to work in this environment. This is also the only book that discusses how to use Mono to overcome .NET Framework limitations in this environment. Even though the title specifies Windows Server 2008 Server Core, the book has also been tested with Windows Server 2012 Server Core. The point of this book is to allow you to get all of the speed, reliability, and security benefits of Server Core installations without all of the hassle that most administrators face.

My third command line reference, Windows Administration at the Command Line for Windows Vista, Windows 2003, Windows XP, and Windows 2000, serves the general needs of administrators and power users. This book is intended to help anyone use the command line more efficiently. It provides a little more hand-holding and considerably more detail about all of the available commands than my other two books. This is also the only book that discusses PowerShell.

The PowerShell portion of this third book has received a lot more attention as of late. Microsoft is placing a much stronger emphasis on this new version of the command line, so I’m glad I included it in my book. One of the strong suits of this book is that it not only discusses documented commands, but many undocumented commands as well (with the appropriate caveats, of course).

No matter which version of my command line reference you use, I’m always here to answer your questions about my books. How do you interact with the command line? Has PowerShell taken a more prominent role in the way you do your work? Let me know at [email protected].


Limitations of the FindStr Utility

Readers have noted that I use the FindStr utility quite often. This utility is documented in both Windows Command-Line Administration Instant Reference and Administering Windows Server 2008 Server Core (and also appears in a host of my other books). At the time I wrote that documentation, I had no idea of how much comment this particular utility would generate. I’ve written a number of posts about it, including Accessing Sample Database Data (Part 3), Understanding Line-, Token-, and String-Based Command Line Utilities, Using the FindStr Utility to Locate Errors in Visual Studio, and Regular Expressions with FindStr. Some people may think that this utility is infallible, but it most certainly has limits. Of course, the FindStr utility is line-based and I’ve already documented that limitation. However, it has other limitations as well.

The most important limitation you must consider is how FindStr works. This utility works with raw files. So, you can use it to look inside executable files and locate those produced by a specific organization as long as the file contains unencrypted data. When an executable relies on obfuscation or other techniques to render the content less viewable by competitors, the strings that you normally locate using FindStr might become mangled as well—making them invisible to the utility. In practice, this particular problem rarely happens, but you need to be aware that it does happen and very likely will happen when the executable file’s creator has something to hide (think virus).

Another problem is that FindStr can’t look inside archives or other processed data. For example, you can’t look inside a .ZIP file and hope to locate that missing file. You might initially think that there is a way around this problem by using the functionality provided in Windows 7 and newer versions of Windows to look inside archive files and treat them as folders. However, this functionality only exists within Windows Explorer. You can’t open a command prompt inside an archive file and use FindStr with it.
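FindStr itself can’t reach inside an archive, but a small script can. This Python sketch (my own workaround, not a FindStr feature) opens a .ZIP file and reports which members contain a search string; it builds a tiny archive in memory purely to demonstrate:

```python
import io
import zipfile

def search_zip(zip_bytes, pattern):
    """Return the names of archive members whose text contains pattern."""
    matches = []
    with zipfile.ZipFile(io.BytesIO(zip_bytes)) as archive:
        for name in archive.namelist():
            # Decode leniently; binary members simply won't match.
            text = archive.read(name).decode("utf-8", errors="ignore")
            if pattern in text:
                matches.append(name)
    return matches

# Build a small archive in memory to demonstrate the search.
buffer = io.BytesIO()
with zipfile.ZipFile(buffer, "w") as archive:
    archive.writestr("notes.txt", "Java programming notes")
    archive.writestr("todo.txt", "buy milk")

print(search_zip(buffer.getvalue(), "Java"))  # ['notes.txt']
```

For a real archive on disk, you’d pass the file’s bytes instead of the in-memory buffer.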

Recently, a reader wrote to me about his Office installation. Previously, he had used FindStr to locate specific files based on their content—sometimes using content that wasn’t searchable in other ways. This feature suddenly stopped working and the reader wondered why. It turns out that .DOC files are raw, while .DOCX files are archives. Change the extension of a .DOCX file to .ZIP and you’ll suddenly find that your ZIP file utilities work great with it. Old Office files work well with FindStr—new files only work if you save them in .DOC format.

Another reader wrote to ask about network drives. It seems that the reader was having a problem locating files on a network drive unless the drive was mapped. This actually isn’t a limitation, but you do have to think about what you want to do. Let’s say you’re looking for a series of .DOC files on the C drive (with a shared name of Drive C) of a server named WinServer in the WinWord folder that contain the word Java in them. The command would look like this: FindStr /m /s "Java" "\\WinServer\Drive C\WinWord\*.doc". When using network drives, you must include the server name, the share name, the path, and the file specification as part of the command. Otherwise, FindStr won’t work. What I have found though is that FindStr works best with Windows servers. If you try to use it with another server type, you might experience problems because FindStr won’t know how to navigate the directory structure.

There is a real limit on the length of your search string. One reader wrote with an immense search string and wondered why FindStr was complaining about it. The utility appears to have a search string length limit of 127 characters (found through experimentation and not documented—your experience may differ). The workaround is to find a shorter search string or to perform multiple searches (refining the search by creating a more detailed file specification). If you can’t use either workaround, then you need to write an application using something like VBScript to perform the task.
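One way to apply the multiple-search workaround is to break the long string into pieces that fit under the limit and run FindStr once per piece, keeping only the files that every search reports. This Python sketch (my own illustration, not a FindStr feature) shows the splitting step; note that a file matching every piece is only a candidate, so you’d still confirm the full string afterward:

```python
def split_search(search, limit=127):
    """Split a long search string into pieces FindStr can accept."""
    return [search[i:i + limit] for i in range(0, len(search), limit)]

long_search = "x" * 300  # stand-in for an immense search string
pieces = split_search(long_search)

assert all(len(p) <= 127 for p in pieces)  # each piece fits the limit
assert "".join(pieces) == long_search      # nothing is lost in splitting
# Run FindStr once per piece, intersect the reported file names, and
# then verify the full string in the surviving candidates.
```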

These are the questions that readers have asked most about. Of course, I want to hear your question about limitations as well. If you encounter some strange FindStr behavior that affects my book’s content in some way, please be sure to write at [email protected].


Creating Links Between File Extensions and Batch Files

A couple of weeks ago I wrote a post entitled, “Adding Batch Files to the Windows Explorer New Context Menu” that describes how to create an entry on the New context menu for batch files. It’s a helpful way to create new batch files when you work with them regularly, as I do. Readers of both Administering Windows Server 2008 Server Core and Windows Command-Line Administration Instant Reference need this sort of information to work with batch files effectively. It wasn’t long afterward that a reader asked me about creating links between file extensions and batch files. For example, it might be necessary to use a batch file to launch certain applications that require more information than double clicking can provide.

This example is going to be extremely simple, but the principle that it demonstrates will work for every sort of file extension that you can think of. Fortunately, you don’t even need to use the Registry Editor (RegEdit) to make this change as you did when modifying the New menu. The example uses this simple batch file named ViewIt.BAT.

@Echo Processing %1
@Type %1 | More

Notice that the batch file contains a %1 entry that accepts the filename and path that Windows sends to it. You only receive this single piece of information from Windows, but that should be enough for many situations. All you need to do then is create a reasonably smart batch file to perform whatever processing is necessary before interacting with the file. This batch file will interact with text (.TXT extension) files. However, the same steps work with any other file extension. In addition, this isn’t a one-time assignment—you can assign as many batch files as desired to a single file extension. Use these steps to make the assignment (I’m assuming you have already created the batch file).

  1. Right-click any text file in Windows Explorer and choose Open With from the context menu.
  2. Click Choose Default Program from the list of options. You see the Open With dialog box shown here.
  3. Clear the Always Use the Selected Program to Open This Kind of File option.
  4. Click Browse. You see the Open With dialog box.
  5. Locate and highlight the batch file you want to use to interact with the text file (or any other file for that matter) and click Open. You see the batch file added to the Open With dialog box.
  6. Click OK. You see the batch file executed on the selected file as shown here.

At this point, you can right-click any file that has the appropriate extension and choose the batch file from the Open With menu. The batch file will receive the full path to the file as shown in this example. It can use the information as needed to configure the environment and perform other tasks, including user interaction. Let me know your thoughts on linking file extensions to batch files at [email protected].


Adding Batch Files to the Windows Explorer New Context Menu

Administrators are always looking for ways to perform tasks faster. Most administrators have little time to spare, so I don’t blame them for looking for new techniques. One of the ways in which administrators gain a little extra time is to automate tasks using batch files. Both Administering Windows Server 2008 Server Core and Windows Command-Line Administration Instant Reference provide significant information about creating and using batch files to make tasks simpler. However, a number of readers have asked how to make creating the batch files faster by adding batch files to the Windows Explorer New context menu. That’s the menu that appears when you right click in Windows Explorer. It contains items such as .TXT files by default, but not .BAT (batch) files.

Being able to right-click anywhere you’re working and create a batch file would be helpful. Actually, the technique in this post will work for any sort of file you want to add to that menu, not just batch files, but the steps are specific to batch files.


  1. Open the Registry editor by typing RegEdit in the Search Programs and Files field of the Start Menu and clicking on the RegEdit entry at the top of the list.
  2. Right click the HKEY_CLASSES_ROOT\.bat key and choose New | Key from the context menu. You’ll see a new key added to the left pane.
  3. Type ShellNew and press Enter.
  4. Right click the new ShellNew key and choose New | String Value from the context menu. You’ll see a new string value added to the right pane.
  5. Type NullFile and press Enter. Your Registry Editor display should look like the one shown here.
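For readers who prefer not to click through the Registry Editor, the same change can be captured in a .reg file and applied by double-clicking it. This is a sketch of the keys and values the steps above create:

```reg
Windows Registry Editor Version 5.00

; Adds a ShellNew key with a NullFile value under the .bat extension,
; which tells Windows Explorer to offer a new, empty batch file on the
; New context menu.
[HKEY_CLASSES_ROOT\.bat\ShellNew]
"NullFile"=""
```

A .reg file like this is also handy when you need to make the same change on several machines.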

At this point, you should be able to access the new entry in Windows Explorer. Right click anywhere in Windows Explorer and choose the New context menu. You should see the Windows Batch File entry shown here:


Selecting this entry will create a blank batch file for you in the location you selected. All you need to do is open the file and begin editing it. What other sorts of time saving methods do you find helpful in working with batch files? Let me know at [email protected].


Exercise Care When Synching to External Time Sources

I read with interest an article by Mary Jo Foley recently entitled, “Microsoft offers guidance on Windows Server Year 2000 time-rollback issue.” It seems that the time source at USNO.NAVY.MIL experienced a problem and rolled back the clocks on a number of servers to the year 2000 during the evening of November 19th. I wouldn’t have wanted to be one of the administrators who had to fix that problem, especially if there were time-sensitive processes running at the time. Can you imagine the effect on applications such as billing? Of course, the effects are devastating for time-sensitive server features such as Active Directory.

If your organization has a single server that relies on a single time source for synching purposes, you probably can’t detect this sort of problem immediately unless you have a human being observing the synching process. Given that administrators love automation, having someone physically sync the server won’t happen in most cases. However, good advice in this case is not to sync to the time server every day—sync only on days when someone will be there to monitor the servers. At least the administrator can quickly react to errant updates of the sort mentioned in the article.

Larger installations with multiple servers could possibly set up multiple time servers and use an application to monitor them. When the servers are out of sync, the application can notify the administrator about the issue. It’s also possible to use the W32Tm utility to perform time monitoring or to compare the time settings of two systems using a strip chart.

Actually, it’s a bad idea to sync to the time server at times when an administrator isn’t available to monitor the system, such as during the middle of the night or a holiday. The best option is to sync the server immediately before the staff arrives in the morning or immediately after they leave at night, when an administrator is available to quickly fix the problem. My personal preference is to include the W32Tm utility in a batch file that runs when I start my system in the morning. This batch file syncs all of the systems on the network at a time when I’m specifically watching to see the results. Both Administering Windows Server 2008 Server Core and Windows Command-Line Administration Instant Reference provide information on how to use this utility to perform a wide variety of time-related tasks.
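As a sketch of such a startup batch file (the second time source and sample count here are examples, not recommendations), something along these lines resyncs the local clock and then sanity-checks it against another source:

```bat
@Echo Off
REM Resync this system's clock while someone is watching the results.
W32Tm /resync

REM Compare the local clock against a second time source as a sanity
REM check; a large or growing offset here deserves immediate attention.
W32Tm /stripchart /computer:time.nist.gov /samples:3 /dataonly
```

Running it by hand in the morning keeps a person in the loop for exactly the reason described above.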

If you happened to be affected by this issue, make sure you read the Microsoft blog post entitled, “Fixing When Your Domain Traveled Back In Time, the Great System Time Rollback to the Year 2000.” Even if you have already fixed the problem, the information in the article is useful because it helps define the problem and provides some useful information for avoiding the problem in the future. The vast majority of servers affected by this problem have Windows 2003 installed without time jump protection enabled. I’d actually like to hear whether someone has encountered something odd in this particular circumstance so that I get a better feel for how this problem manifested itself in the real world.

How do you work through time-related issues in your organization? Have you ever encountered a problem of this sort with your system? Let me know your thoughts at [email protected].


Talking Technical with Non-technical Audiences

Communication has always been key to any sort of technical activity, but the need to communicate efficiently is greater today than ever before. The fact that early developers were successful despite having limited communication skills is due more to the fact that early users were also technical (so they shared the same frame of reference) than to any superiority of the application environment at the time. In fact, applications are a form of communication specific to computers, but until recently, most developers didn’t view them in that light.

The days of the solo developer working in a darkened room and subsisting on a diet of pizza and soda are gone. Applications today have to appeal to a broad range of people, most of whom have no technical skills and no desire whatsoever to develop them. The complex application environment means that developers must possess the means to articulate abstract coding issues in a concrete and understandable manner to people who view their computers as appliances. In addition, developers now commonly work as part of a team that includes non-developer members such as graphics designers. In short, if you don’t know how to tell others about your ideas and the means you plan to use to implement them, your ideas are likely going to end up on the junk heap. That’s why I wrote, “10 Reasons Development Teams Don’t Communicate” for the SmartBear Blog.

The problems that developers experience today have more to do with limited communication skills than with technical ability. It’s quite possible to write amazing applications without developing the skills to communicate the concepts and techniques demonstrated in those applications to others. In fact, the stereotype of the geek is funny, in part, because it has a basis in fact. Schools don’t spend much time teaching those with technical skills how to communicate effectively, and graduates often struggle to understand the basis for miscommunication, even amongst peers. Schools will eventually catch up and begin teaching developers (and other technical disciplines) strong communication skills, but in the meantime, developers and other members of the technical professions will need to rely on articles such as mine to obtain the information needed to communicate clearly.

A successful developer now needs to listen to others actively, repeating back the goals others have for an application in terms that the listener understands. In addition, the developer needs to know how to communicate well in both written and oral forms. The transition between the abstract world of code and the concrete world of the typical user is something that a developer needs to practice because there are no books that adequately address the topic today. To keep costs to a minimum, developers must accomplish communication tasks within a limited time frame and without error. In short, there is a significant burden on the developer today to create an environment in which users, administrators, management, DevOps, and other interested parties can communicate both needs (required application features) and wants (nice-to-have application features) in a way that the developer can interpret and turn into a functioning application. Luckily, there are ways to make this a bit easier on the developer. For example, when it comes to DevOps, Agosto offers expertise to help you rapidly deliver what’s needed.

What sorts of communication issues have you faced as a developer or other technical specialist? Do you often find that people look at you quizzically and then proceed as if they understand (but you can tell they don’t)? Let me know your thoughts about communication issues at [email protected].


Saving Data to the Cloud

Cloud computing is here, no doubt about it. In fact, cloud computing offers the only viable way to perform certain tasks. For example, software such as Sage200 cloud assists businesses with their management, not just for accounting but for other cloud-based needs too. Certainly, large organizations can’t get by without using cloud computing to keep the disparate parts of their organization in communication. From the modular web services offered by Google Cloud to the CMMS software provided by Axxerion, there really is a cloud-based solution for everything, making running a large business or organization easier. However, on a personal level, I’ve been unimpressed with saving data to the cloud for a number of reasons:

  • Someone could easily obtain access to confidential information.
  • The data is inaccessible if my Internet connection is down.
  • A cloud vendor can just as easily lose the data as I can.
  • The vendor doesn’t have a vested interest in protecting my data.
  • Just about anyone with the right connections could seize my data for just about any reason.

As a consequence, I’ve continued to back my system up to DVDs and store some of these DVDs off-site. It’s an imperfect solution, and I’ve often considered using the cloud as a potential secondary backup. However, when I saw the news today about Megaupload and the fact that the data people have stored there is safe for possibly two more weeks, I started reconsidering any use of cloud backup.

Just look at what has happened. The federal government seized data from the site and then shut it down, making users’ data inaccessible to them. If someone who used that service for backup is having a bad day with a downed system, it just got worse: there isn’t any means of recovering the data until someone decides to make it accessible again.

If the data does become accessible again, the users have two weeks in which to download everything and find another place to store it. Losing personal mementos is bad enough, but to lose confidential information on top of that (think accounting data) makes the loss terrifying indeed. There is also the matter of federal possession of everyone’s data for use in court, no less. Now everyone will potentially know everything that people have stored on Megaupload: the good, the bad, and the ugly.

Of course, everyone is talking about what this means, but personally, I go along with John Dvorak in thinking that this incident gives cloud storage the huge black eye that it rightfully deserves. These services promise much, but I can’t see how they can possibly deliver it all. Yes, there are advantages to using cloud backup, such as off-site storage outside of your location, so that if an extreme disaster strikes, you should theoretically have your data stored in a safe place. Of course, there is also the convenience factor, assuming that you have an Internet connection that’s fast enough to make backup of an entire system practical.

Cloud computing is going to remain a part of the computing environment from now on, but I think cloud backup has a lot further to go before anyone should trust it as a primary means of data storage. What are your thoughts about cloud backup? Let me know at [email protected].