
Seattle Area Software Quality Assurance Group


Join the SASQAG group on LinkedIn

Past Meetings 2011


Managing a collaborative multi-national team
in real-time using Agile/Lean/Scrum/XP
  ...Building a 100 MPG road car in 3 months

January, 2011

Joe Justice shares how he ported software-team best practices back to their roots to compete for $10 million in the Progressive Insurance Automotive X Prize. Driven by a desire to optimize automotive performance while minimizing cost and environmental impact, Joe formed WIKISPEED, a small, volunteer-driven team. They are manufacturing a revolutionary 100 mpg, gasoline-powered, four-seat car with a target price of $17,995. Joe will walk through how they are accomplishing the seemingly impossible, explaining Agile in practice using his experience in the Progressive Insurance Automotive X Prize as the example.

Slides are here.

Bonus: Here's a link to the first 18 minutes of the talk, and here's more information from TechFlash.
Joe Justice
wikispeed.com

Joe Justice is a Seattle-area lean-software consultant and entreprenerd, and a registered automotive manufacturer since 2007. In 2010, Joe's X Prize team, WIKISPEED, tied for 10th place in the mainstream class of the Progressive Insurance Automotive X Prize, a $10 million challenge for 100+ MPGe automobiles. Joe has spoken on social web application development, project methodology, and agile best practices to audiences at Denver University, the University of California, Berkeley, Google, the Bill and Melinda Gates Foundation, Rotary International, and others. Joe is currently on assignment at Microsoft and is CEO of WIKISPEED.
Large Scale Integration Testing at Microsoft

February, 2011

Given the large number of components and dependencies, substantial time and resources are expended by Visual Studio development teams in integrating and validating their components into the Visual Studio product. In the past, the flow of code was often hampered by component teams not efficiently and effectively qualifying their code contributions into the main build. As a result, dependent component teams started stalling, as breaking changes proliferated with respect to product and test code – they paid a heavy price in terms of failure analysis. Those dependent teams also became wary of merging those unstable code contributions from main into their own code bases as part of their forward integrations (FIs). In short, the flow of code became unpredictable, resulting in a code base that was often in an unstable state.


In this presentation, Jean describes some of the challenges faced in reinvigorating the code flow while ensuring that code quality was not compromised. He examines some of the issues that resulted in stagnant code flow, discusses some of the lessons gleaned from a test perspective, and focuses on the new strategies, processes, and tools his team is developing to make integration testing more efficient and effective. The presentation uses examples and data from this “fearless FI” initiative to illustrate and emphasize key points.

Slides are here.

Jean Hartmann
Microsoft

Jean Hartmann is currently a Principal Test Architect in Microsoft’s Developer Division, with previous experience as Test Architect for Internet Explorer. His main responsibility is driving the concept of software quality throughout the product development lifecycle. Prior to Microsoft, Jean spent twelve years at Siemens Corporate Research as Manager for Software Quality. He earned a Ph.D. in Computer Science in 1993 while researching the topic of selective regression test strategies.
The Changing Software Testing EcoSystem and its Impact on the Product, the Market and the Economy

March, 2011

Software testing as an ecosystem has undergone radical changes in the last decade, and more specifically in the last five years or so. These changes have had a very positive impact on overall product quality, the market, and the economy, creating a win-win situation for end customers as well as software product development companies. In this presentation, we will discuss those specific changes, the resultant impact, and some of the future changes and trends we expect at this point in time.

Slides are here.

Rajini Padmanaban
Director of Engagement, Global Testing Services, QA InfoTech

As Director of Engagement, Rajini leads the engagement and relationship management for some of QA InfoTech's largest and most strategic accounts. She is also involved in test evangelism and thought leadership activities, such as blogging on test trends, technologies and best practices; building the test brand for QA InfoTech in the U.S.; and generating ideas for service enhancements. Rajini has more than nine years of professional experience, primarily in the software quality assurance space. Over the years, as part of Polaris Software Labs and later at Disha Technologies, Aztecsoft and MindTree, she has been in various client-facing roles including project management, engagement management and QA pre-sales for leading ISVs such as Microsoft. Her primary areas of expertise are technical account mining and customer relationship management, both of which help her take existing strategic accounts to new heights.
Testing in Production:
Your Key to Engaging Customers

April, 2011

Seth Eliot will show you how to use Testing in Production (TiP) to align your software development to your customers' needs and discover those unarticulated needs that drive emotional attachment and market share. Seth will demonstrate the tools you can use to TiP and get direct, actionable feedback from actual users. Feature lists do not drive customer attachment; meeting key needs does. Seth maintains that getting prototypes and product in front of real users is crucial to uncover features that meet these key needs and to quantify how much of an impact they will have. Understanding this impact is important, since evidence shows that more than half of the ideas that we think will improve the user experience actually fail to do so, and some actually make it worse. Techniques like Online Experimentation and Exposure Control enable you to find what works and what doesn't. Production, however, can be a dangerous place to test, so these techniques must also limit any potential negative impact on users. Seth draws on examples from software leaders like Microsoft, Amazon.com, and Google to show how Testing in Production with real users will enable you to realize better software quality.
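
(Not from Seth's talk: a minimal sketch of how exposure control is often implemented, assuming a hypothetical exposure_bucket helper. Each user is hashed into a stable bucket so that only a small, adjustable percentage sees the new experience while everyone else stays on the control.)

    import hashlib

    def exposure_bucket(user_id: str, experiment: str, exposed_percent: float) -> bool:
        """Deterministically decide whether this user sees the new experience."""
        # Hash the (experiment, user) pair so assignment is stable across sessions.
        digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
        bucket = int(digest[:8], 16) % 10000       # bucket in 0..9999
        return bucket < exposed_percent * 100      # e.g. 1.0% exposes buckets 0..99

    # Example: expose a risky change to roughly 1% of users first.
    variant = "new" if exposure_bucket("user-42", "new-checkout-flow", 1.0) else "control"

Because the assignment is deterministic, a given user keeps the same variant across sessions, and widening the rollout only requires raising the exposed percentage; comparing metrics between the exposed and control groups is the online experiment itself.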

 

Slides are here.

Seth Eliot
Microsoft

Seth Eliot is a Senior Test Lead at Microsoft, where his team solves exabyte storage challenges for Bing. Previously he was Test Manager for the Microsoft Experimentation Platform (http://exp-platform.com), which enables developers to innovate by testing new ideas quickly with real users “in production”. Testing in Production (TiP), software processes, cloud computing, and other topics are ruminated upon at Seth's blog at http://blogs.msdn.com/b/seliot/. Prior to Microsoft, Seth applied his experience at delivering high-quality software services at Amazon.com, where he led the Digital QA team to release Amazon MP3 download, Amazon Video on Demand streaming, and support systems for Kindle.
Ending WTF In Our Lifetime: Improving Communication between Dev and Test with Kanban

May, 2011

The WTF affliction is evidenced by frustration. Testers get frustrated when they get untestable or undescribed code. Coders get frustrated when testers show up asking apparently obvious questions. Everyone gets frustrated when features show up over and over again to test, only to get sent back with more defects or worse - the same defects as before. In the end, everyone stands around banging their heads against their cubes screaming "WTF, why don't they get it?!" Kanban and Personal Kanban can help end WTF in our lifetime. With them, we can visualize work, exchange information, and improve relationships. Join Jim Benson from Modus Cooperandi and Dawn Hemminger from Attachmate as they describe the basics of kanban, why it works, and how major companies are using it right now to end WTF.
Jim Benson is CEO of Modus Cooperandi, a collaborative management consultancy in Seattle, Washington. After being steeped in Agile for many years, Jim started working with kanban and Lean thinking in 2005. In 2008, he started taking this idea further with Personal Kanban, which brings flow-based work to the individual and team. Since then he has been integrating Agile and Lean into his work with his own software company, as well as with clients like the United Nations, British Telecom, NBC Universal, and the World Bank. Jim started Seattle Lean Coffee in 2009 to form a community of practice around Lean thinking. Lean Coffees have now spread worldwide, including Los Angeles, San Francisco, Toronto, Stockholm, and Sydney. He recently co-authored Personal Kanban: Mapping Work | Navigating Life with Tonianne DeMaria Barry.

Dawn Hemminger is a Software Test Lead at The Attachmate Group in Seattle, Washington. Her team is responsible for testing the Reflection X and Reflection X Advantage X Server products. Supporting two products with overlapping release schedules, each with its own demands for automated test suite development, test result analysis, manual testing, and bug verification, the team must ensure it is always focusing on the most important tasks at the right time. By experimenting with and adapting concepts and tools from Scrum, Agile, and Lean over the past three years, her team today is able to visualize its workflow, quickly adapt to schedule changes, and continuously improve its processes. The successes from her team are now spreading throughout the company as additional test and development teams adopt similar practices.
Better Test Design for Everyone

June, 2011

“Here – test this”. Those were the first words many of us heard when we began our careers as testers. Over time, we learned techniques, approaches, and ideas that helped us find bugs and gather meaningful information about the state of the software we test. Our approach to test design grows through our experiences, but how do we know if our test design is good enough? Do we know when we test too much, or not enough, or which of our tests provide the most valuable information? How can we tell the difference between a “good” test and a “bad” test? Is a good test always good? Is a bad test always bad? A good test may find a bug or reveal new information, but is a test that never finds a bug a bad test? Test design is simple in theory (just come up with some test ideas), yet enormously complex in practice (what is the best set of tests for this product given our team, skills, market, schedule, and other context?). Good test design requires a portfolio of testing ideas and the knowledge to use the right ideas in the right places. Join us at SASQAG as we discuss how to build a test design portfolio and design valuable tests for any product.

Slides are here.
Alan Page began his career as a tester in 1993. He joined Microsoft in 1995 and is currently a Principal SDET on the Office Lync team. In his career at Microsoft, Alan has worked on various versions of Windows, Internet Explorer, and Windows CE, and has served as Microsoft’s Director of Test Excellence. Alan writes about testing on his blog, is the lead author of How We Test Software at Microsoft (Microsoft Press, 2008), and contributed a chapter to Beautiful Testing (O’Reilly, 2009).
Dealing with large data sets and complex algorithms in your testing

July, 2011

As applications get more complex, with large data sets, testing the application can sometimes be more challenging than the development work. How do we, as testers, deal with situations where selecting test inputs or equivalence partitioning is difficult because of a large data set? What kind of test oracle do we use when the application logic or algorithm is quite complex? For example, how would you test Google's or Bing's search results? There is a nearly infinite number of inputs one can use for testing, and you also need to verify the algorithm used for placement of the results (which ones go on the first page, and in what order they appear). Jae-Jin's test team is dealing with issues like these, and he will share his experiences and gather feedback from the audience.
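
(Not from Jae-Jin's slides: a minimal sketch of one common answer to the oracle problem described above. When the exact expected ranking is unknowable, a test can instead assert properties that any correct result page should satisfy; the id/title/score result format here is hypothetical.)

    def ranking_property_failures(query, results):
        """Partial-oracle checks for a page of search results."""
        failures = []
        ids = [r["id"] for r in results]
        if len(ids) != len(set(ids)):
            failures.append("duplicate results returned")
        scores = [r["score"] for r in results]
        if any(earlier < later for earlier, later in zip(scores, scores[1:])):
            failures.append("results are not sorted by descending score")
        if results and not any(query.lower() in r["title"].lower() for r in results):
            failures.append("no result on the page mentions the query term")
        return failures

    # Example: one sampled query from a large input partition.
    page = [{"id": 1, "title": "Software testing basics", "score": 0.92},
            {"id": 2, "title": "Testing large data sets", "score": 0.87}]
    assert ranking_property_failures("testing", page) == []

Combined with equivalence partitioning of the query space (for example, sampling head, tail, misspelled, and multi-language queries), property checks like these scale to input spaces where enumerating exact expected outputs is impractical.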

Slides
Jae-Jin Lee has been working in the software testing field for four years. Currently he is an SDET on Expedia's Revenue Optimization Engineering team; he previously worked at Livemocha and Attachmate as a software test engineer. Outside of his work at Expedia, Jae-Jin enjoys helping college students learn more about software testing, and has given a few presentations at Pacific Lutheran University ("Testability in your code", "Skills needed to be a good software test engineer"). Jae-Jin also enjoys developing math and science online learning tools, and recently started a non-profit educational site (www.perphy.com) to help students learn math and science.
The "Two Engine Model" for Software Quality Measurement and Improvement:
A lively romp through process capability, documentation, and measurement without getting your feet wet.


August, 2011

Many standards, initiatives, and measurement programs rely on the identification and documentation of work processes with the goal of improving quality, cost, schedule performance, productivity, and so on. The problem has been how much of a process needs to be documented to make it a capable and managed process. If we improved it, how would we know? Why does there seem to be a disconnect between process improvement initiatives, quality assurance, and the real work of getting software out the door? This presentation gives you a set of tools that can be used at a project, program, organizational, and/or enterprise level to incrementally define and measure processes to identify and eliminate waste and rework.

Slides
Tom Gilchrist
Associate Technical Fellow, CAS, Boeing

Tom has worked at Boeing for the last 27 years as a senior software engineer, and is currently an Associate Technical Fellow in the field of software quality assurance for Boeing Commercial Aviation Services (CAS). Before his work at Boeing, he worked as the principal in a number of software development startup companies and has worked as a software development consultant. Tom is a member of the American Society for Quality (ASQ), and serves as the ASQ software division's Region 6 counselor. Tom is currently involved in the University of Washington's Extension Software Testing Certificate program both as an instructor and as a member of the advisory board. He also currently serves on the board of the Seattle Area Software Quality Assurance Group (SASQAG.org).
Application Monitor: Putting games into Crowdsourced Testing

September, 2011

Software test teams around the world are grappling with the problem of testing increasingly complex software with smaller budgets and tighter deadlines. In this tough environment, “crowd-sourced testing” can (and does) play a critical role in the overall test mission of delivering a quality product to the customer. On the Microsoft Lync team, we firmly believe in using the “crowd of testers” to help us reach high levels of test and scenario coverage. To achieve this goal, we use an internal dog-fooding program where employees volunteer to use pre-release versions of our products and give us their feedback on a regular basis. However, for a volunteer-based crowd-sourced testing effort to be really effective, one needs the ability to direct the “crowd” to exercise certain scenarios more than others, and the ability to adjust this mix on demand.

What if one could devise a mechanism that provides the right incentives for the “crowd” to adopt the desired behaviors in near real-time? This paper describes how we conceived of, designed and implemented ApplicationMonitor, a tool that runs on a user’s machine and allows us to detect usage patterns of Lync in near real-time. The paper then describes a simple game we incorporated into the tool with the goal of making it fun for the “crowd”. The game also provided us with the ability to direct their efforts to test high-risk features, by appropriately changing incentives. One learning point was that even “gaming the system” behaviors in the “crowd” served the ultimate purpose, which was to increase testing of specific Lync scenarios. Vivek will discuss this and other takeaways as well as plans to improve the tool and the game over the next year.

Slides are here.
Vivek Venkatachalam
Microsoft


Vivek Venkatachalam is a software test lead on the Microsoft Lync team. He joined Microsoft in 2003 and worked on the Messenger Server test team before moving to the Lync client team in 2007. He is passionate about working on innovative techniques to tackle software testing problems.
Pushing the Boundaries of User Experience Test Automation

User experience (UX) testing is often limited to manual, interactive testing, which takes significant time and can be expensive. Over the last few years we have been finding ways in which automated tests can help save time and resources, automating various aspects of UX testing as far as is reasonably possible. These automated tests have found numerous problems, and they even work for highly complex web applications.

Julian will share a practical experience report of their successes, together with the barriers and limitations they discovered: detecting navigation issues, layout bugs, and problematic differences between the behaviour of various web browsers. He will also cover some of the risks of relying on automated tests and how, paradoxically, they may increase the chances of missing critical problems if you choose to rely mainly or even solely on them.

Discover when UX test automation is appropriate and how to combine it with other forms of testing.

Slides are here.
Julian Harty
eBay


Julian is currently a Tester At Large @ eBay, where he is undertaking various missions to help improve software quality. These missions include improving the relevance and value of testing practices, test automation, and more. He’s driven to find ways to improve software and computer systems so that they adapt to the needs of users, rather than users having to cope with poor technology. He also shares material publicly to enable others to do likewise. You can find his work online at various sites, including http://blog.bettersoftwaretesting.com and http://code.google.com/u/julianharty/. He has a BSc in Computer Science, was on the ISEB exam panel for software testing, and has spoken at several hundred conferences internationally.
The "Two Engine Model" for Software Quality Measurement and Improvement: A lively romp through process capability, documentation, and measurement without getting your feet wet.
Part 2.


Tom never got a chance to finish this presentation, so we're giving him a second (and last) chance.

Many standards, initiatives, and measurement programs rely on the identification and documentation of work processes with the goal of improving quality, cost, schedule performance, productivity, and so on. The problem has been how much of a process needs to be documented to make it a capable and managed process. If we improved it, how would we know? Why does there seem to be a disconnect between process improvement initiatives, quality assurance, and the real work of getting software out the door? This presentation gives you a set of tools that can be used at a project, program, organizational, and/or enterprise level to incrementally define and measure processes to identify and eliminate waste and rework.

Tom Gilchrist
Boeing


Tom has worked at Boeing for the last 27 years as a senior software engineer, and is currently an Associate Technical Fellow in the field of software quality assurance for Boeing Commercial Aviation Services (CAS). Before his work at Boeing, he worked as the principal in a number of software development startup companies and has worked as a software development consultant. Tom is a member of the American Society for Quality (ASQ), and serves as the ASQ software division's Region 6 counselor. Tom is currently involved in the University of Washington's Extension Software Testing Certificate program both as an instructor and as a member of the advisory board. He also currently serves on the board of the Seattle Area Software Quality Assurance Group (SASQAG.org).

Email questions about SASQAG or this web site to: webmaster at sasqag.org

Mailing Address:
Seattle Area Software Quality Assurance Group (SASQAG)
14201 SE Petrovitsky Rd
Suite A3-223
Renton, WA 98058