For the past three years, I have been working in the Cloud + AI division of Microsoft as a subject matter expert, advising teams on how best to fix accessibility bugs and promoting accessible design. I find the work very fulfilling and look forward to continuing to make an impact on disabled people's ability to seek equal employment. My journey to this position has been long and interesting, and in this article, I'd like to share the road I traveled to get where I am today.
Who am I
First, I'd like to talk about myself and the part of Microsoft I work for. I have been interested in computers since I was seven years old and got my first Apple IIe. I still have fond memories of learning basic computing skills accompanied by the very melodic tones of the Echo II speech synthesizer and the TextTalker screen reader. Assistive technology has come a long way since those days, and I grew with it, moving from Applesoft BASIC to writing code in GW-BASIC on MS-DOS using the Tinytalk screen reader. My first exposure to Windows and a graphical user interface was Windows 3.11 with Outspoken for Windows as my screen reader. I learned a lot about how Windows worked using that screen reader, and as technology progressed I moved on to Windows 9x, and then eventually to Windows 2000 and XP, by then using Window-Eyes.
By this time I had decided to make computers my career, and at first I looked into software development. After graduating from college and trying a few internships, I decided for a variety of reasons to shift to systems administration. I studied both Unix/Linux and Windows system administration, earned my Microsoft Certified Systems Administrator (MCSA) 2003 certification, and landed my first job in 2006. I spent over ten years managing networks for different companies, ranging from a small firm with 80 employees and 9 servers in one office to a billion-dollar company with 3,000 people and at least as many servers scattered around the world.
Employment speed bumps
Getting and keeping a job when you have a disability is difficult on its own. The issues range from employers who don't want to take the risk of hiring someone with a disability to internal line-of-business applications that simply were not designed to be accessible. Working in IT as a systems administrator managing Windows machines presents some unique challenges of its own. From remotely accessing machines to installing operating systems in the days before Windows Setup was self-voicing, several issues constantly came up among blind administrators. Accessibility on the Unix/Linux side was relatively simple: everything was command-line based, and with a good SSH client I was ready to go. In the end, thanks to a solid knowledge of my assistive technology and some very understanding supervisors, even the Windows servers proved smoother than I expected. First, many of the admin tools you need can run on a client machine and connect to a server over a remote management protocol. When I started out, most of this was done using Remote Procedure Calls (RPC), but many tools have since moved to more modern protocols. That covered most day-to-day scenarios, but there was still sometimes a need to access the server desktop directly. I used the Window-Eyes screen reader, which had recently introduced support for Microsoft's Remote Desktop Protocol, so I could connect to the server with the RDP client, run a copy of Window-Eyes on the server itself, and do my job. This was the surprisingly smooth part. Instructors and rehabilitation professionals had told me that getting a company to load a screen reader onto its servers would be hard. Fortunately, my supervisor at the time was very accommodating. We just needed a window when the environment could be brought down (this was a small company, so that was relatively easy); we installed the screen reader, and I went to work. Over my years as a sysadmin, I at one point had Window-Eyes running on hundreds of servers around the world and never ran into a problem. Even when connecting over poor internet links to servers in places as far away as South Africa, I noticed very little lag and was able to operate just as efficiently as a sighted administrator.
Support Challenges
Some interesting accessibility issues came up as I worked through my first job. Some had easy solutions; some never got completely solved. Perhaps the easiest was how to support end users without seeing what was on their screens. Obviously, if I was doing phone support, I wouldn't be able to see the screen anyway. However, situations where I had to go to a user's desk, or work on their machine while they were not around, were challenging because I could not easily install a full copy of a screen reader on their machine and then remove it. This was before Narrator or VoiceOver were really viable options. Window-Eyes had just introduced a special version of the product that could be installed on and run from a thumb drive for exactly this kind of situation, and that, combined with a good USB headset (USB audio devices can still produce sound even if the computer's own audio device is not working), solved the problem.
On the flip side, one of the most frustrating situations in the early days of my sysadmin career came up when I had to work with tech support on the phone. Picture the following: it is late at night, a system goes down, and you need to call support. You have the number, the contract ID, and all the other important things you'll need to open a case. You sit there listening to the wonderful hold music until the support technician comes on the line. You describe the problem, and they then want to connect to your computer so they can see your screen. You go through the process, they connect, and they can see the UI. They then say, "OK, just click on the blue icon in the left pane." The conversation quickly devolves, as more than likely the technician has never worked with a blind person before and has no idea how to navigate the UI with a keyboard. While many technicians could adapt and work with me, I ran into quite a few I could never make understand my needs. In many cases, I had only a limited amount of time to restore service before my job performance could be affected, and explaining to the technician how best to work with a blind person, on top of the existing pressure, was very stressful. Most of the time I just let the tech take control, hoped they knew what they were doing, and hoped my manager never found out. Sometimes I had to get someone with working eyeballs to watch the tech do their thing, which is not always possible at 2:00 in the morning. All of those issues eventually had some kind of solution, and at the end of the day they are things I can laugh about now. However, the final big story is the one that brought me to Microsoft, and it could easily have gone in a very bad direction.
The Road to Microsoft
Back in 2009, I went to training to learn a new piece of software called Microsoft System Center Configuration Manager (SCCM). If you have worked in a large organization over the past twenty years or so, this software was probably running on your work machine, used to install programs remotely, inventory hardware and software, and even upgrade Windows. It was a complicated program, but fortunately the user interface was accessible. I could go through the same training as any other SCCM administrator and could be expected to be just as proficient with the application as anyone else. The application was built on the Microsoft Management Console (MMC), which was a very straightforward setup: a tree view on the left to select which area of the tool you wanted to work with, a list view on the right where you could select the various objects, and sometimes a view pane where you could see some status and select commonly used functions. The UI made heavy use of accessible context menus and well-laid-out wizards with standard controls. At the time, the average base salary for an entry-level SCCM administrator was about $70,000 a year, so with the right instruction, this was a career a blind person could succeed in.
In mid-2010, everything started to change. Microsoft released the first public preview of the new version of SCCM. It had many amazing new features, all of which were going to knock SCCM admins' socks off, and a new user interface that would make SCCM the easiest piece of software in its category to use… unless you were using a screen reader. Of course, as soon as I read about the new UI, I was anxious to try it. Who wouldn't want their workflow to get even easier? I set up a test environment (a day-long process, as I had to build a separate test Active Directory forest to run the various servers) and, with anticipation, opened the new SCCM console for the first time. And encountered a bunch of unlabeled buttons and silent tab stops. Oops.
This first version of the UI was a case study in what not to do when building an accessible desktop application. Many buttons were unlabeled, so all my screen reader said was "button." The first version of the new tree view didn't read any of the options, so I knew I was in a tree but had no idea what I was selecting. Some controls were not in the tab order at all, and while their functions were keyboard accessible, there was no documentation of what the keyboard shortcuts actually were. Perhaps worse, the main screen had become so busy, and there was so much in the tab order, that there were now about 50 items I had to tab through just to get back to the beginning. The SCCM console had adopted the Office ribbon model, all ribbon items were in the tab order, and there was no way to exit the ribbon once you had entered it. In short, it was an unusable mess. It was just a preview, so I held out hope that it would be fine when the final release came out. To be on the safe side, I took as many steps as I could to let Microsoft know about the problem: I commented on the user forums, contacted the Microsoft technical account manager for my company, and even traveled to the Microsoft Management Summit in 2011 to speak with some people on the SCCM product team. As more previews came out, however, nothing changed. The app remained inaccessible, and I started to get very worried. I was the only SCCM administrator for my company, and at the time there was no push to upgrade to the new 2012 version, as we didn't need any of the new features. My job was safe for the moment, but we wouldn't be able to keep running an older version forever. While we had several years, the time to address the issue was now. I started researching next steps.
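To make the "unlabeled button" problem concrete: a screen reader announces whatever a control exposes as its UI Automation Name property, so an empty Name leaves the user hearing nothing but "button." As a rough, after-the-fact illustration (a sketch only, using the third-party pywinauto library and a placeholder window title, not anything I was running at the time), a small Python script can walk an application's UI Automation tree and flag buttons with empty names:

from pywinauto import Application

# Attach to an already-running application by window title (placeholder pattern).
app = Application(backend="uia").connect(title_re=".*Configuration Manager.*")
main_window = app.top_window()

# Enumerate every button exposed through UI Automation.
for button in main_window.descendants(control_type="Button"):
    name = button.window_text()  # the accessible Name a screen reader announces
    if not name.strip():
        # An empty Name is exactly the "button" with no label described above.
        print("Unlabeled button, AutomationId:", button.element_info.automation_id)

Tools like Accessibility Insights do a far more thorough version of this kind of check, but the core idea is the same: if the Name is empty, the screen reader has nothing to say.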
To make a long story short, many years passed with little really happening. We stayed on the old version of SCCM and found workarounds to keep things running. By around 2014, though, it just wasn't possible to keep the old system any longer. IT management wanted to introduce a new OS deployment system so that end users could reimage their own computers while company IT policies were still enforced. To do this, we needed the latest version of SCCM, by then 2012 R2. So we had to upgrade. Though I had kept up communication with Microsoft, and had even done a demo of the current state of accessibility, I had heard of no plan on their side to fix things.
The Scripting Approach
I turned to GW Micro, at the time the makers of Window-Eyes, for a solution. Most major screen readers, including Window-Eyes, had a way to use scripts to make an inaccessible application more accessible. This had been done with great success in many other workplaces, so I was hopeful it would provide a solution for me. We engaged a scripting professional at GW Micro, gave them access to the environment, and turned them loose. They were able to make a big difference, but the result was still very clunky to work with. For example, at the time the lists of objects were not always fully rendered, in order to improve performance, so only the visible objects were available. I was unable to navigate the lists on my own: my screen reader read nothing when I used the arrow keys, and no events were fired that it could pick up on. So the GW Micro scripts had to iterate through the entire list and build their own accessible dialog that displayed the items in an accessible control. Even then, the lists only showed up to twenty items when there could sometimes be over 1,000 items in total. Often there were also multiple lists on screen at once; for example, a list of computers returned by a query I had run, and another list showing the recent SCCM advertisements those computers had run. So we had to put logic into the scripts so I could choose which list to read data from and select what I wanted to change. There were also issues with the main tree view used to select the different areas of the product to work in. While this was more straightforward to script around, it took a big toll on productivity compared to the old version. Even with working scripts, I had to press a command to run a script that iterated through the tree, collected each option, and presented them in an accessible tree. Even then the tree wasn't complete: when I opened a sub-branch, the script had to recalculate and look for the new items that had appeared. This tree could often run three or four levels deep before you got to the option you wanted, whereas the old 2007 version of the tree had only two, or sometimes three, levels. The extra levels would have been fine had the original tree been accessible. Even when the scripts were working, the process was clumsy, and a task like setting up monthly server reboots would take me at least twice as long as it would a sighted peer.
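For readers curious what that kind of workaround looks like in practice, here is a very rough sketch of the idea in Python using the third-party pywinauto library. The real scripts were written with Window-Eyes' own scripting facility, and the window title and control lookup below are placeholders, so treat this as an illustration of the approach rather than what actually ran. The script harvests whatever list items UI Automation currently exposes and re-presents their names, which also shows the virtualization limitation described above: only the rendered (visible) items are available to harvest.

from pywinauto import Application

# Attach to the running console by window title (placeholder pattern).
app = Application(backend="uia").connect(title_re=".*Configuration Manager.*")
main_window = app.top_window()

# Grab the first list control in the results pane (placeholder lookup).
results_list = main_window.descendants(control_type="List")[0]

# Collect the names of the items UI Automation exposes right now.
# With a virtualized list, this is only the visible items, not all 1,000+.
visible_items = [item.window_text()
                 for item in results_list.descendants(control_type="ListItem")]

# Re-present them in a form a screen reader handles well; the real scripts
# built their own accessible dialog rather than printing to a console.
for index, name in enumerate(visible_items, start=1):
    print(f"{index}. {name}")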
At about this time, Microsoft had just introduced its Enterprise Disability Answer Desk (eDAD). I was one of the first people to reach out with the issues I was having, and it turned out the manager of the eDAD had actually worked with the SCCM team before as a program manager. She knew just who to talk to, and finally things started happening. We did another demo that was very widely attended, and we even had regular syncs with the SCCM team. The then-head of the eDAD lived very close to where I did at the time, so she came out to do a hands-on investigation of the problem and see just how bad it was. Things were finally improving on the Microsoft side, but it was still a slow road, and I was never able to move away completely from the GW Micro scripts for as long as I worked at that job.
Microsoft Comes Calling
Sometimes an opportunity comes up that is just too good to pass up. A few days after that meeting, the person from Microsoft asked me to pass along my resume, and my journey at Microsoft began. After quite a number of interviews, I accepted a job with Microsoft's Cloud + Enterprise (now Cloud + AI) division, which at the time was where the SCCM product was managed. I would be working with over 200 teams, including the SCCM team, to help them understand the impact of accessibility bugs. This included empathy building with the product teams, as well as training them to find accessibility bugs themselves using a screen reader and the newly released Accessibility Insights tool. As happens at any big company, the SCCM team was eventually re-orged into another division and I was no longer working with them, but after my move to Microsoft I did work with them quite a bit for over a year, and major improvements were made to the accessibility of the product overall. I now know from other blind system administrators that while there is still a ways to go, the usability of the product has increased dramatically and it is once again possible to work competitively alongside sighted administrators. I credit Microsoft's new culture of accessibility for this; it is a real change that has come from the top-down emphasis on accessibility. I know there is still a long way to go, and the ever-changing paradigms of human-computer interaction mean job security for my team and me for a long time to come. Building accessibility into a product from the design stage is the only way to ensure a great user experience from the beginning, but with the right people willing to listen, a lot of progress can be made with existing products as well.
Kenny Johnson says:
Hello, I am looking for an accessible SSH terminal for blind/low-vision users that runs on Windows. In the article above you state that Linux accessibility was easy for you… what SSH terminal is good for low-vision or blind users? I have used PuTTY for some time, but now my vision is getting really bad (glaucoma) and I would like to find a terminal that is easier to use… especially for the future. Thanks.
Ryan Shugart says:
There are many good SSH clients for Windows. PuTTY has been known to work with JAWS and NVDA, although I have heard some grumblings about the newer versions, and I know some people who use SecureCRT successfully with JAWS. If you need something basic, however, Windows 10 has an SSH client built right in that you can access by typing "ssh user@host" from the command prompt. Honestly, if your needs are simple and you can work with what's built in, I'd stick with that. I used the OpenSSH command-line client for years and it worked fine.