Monday, April 6, 2020

Software Development Best Practices


S - Single Responsibility Principle

An element of code (such as a module, class, or function) should do one primary thing and nothing else. That is, do not build Swiss army knife code elements. For example, a function that changes the case of a string should not trim it as well.
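As a minimal sketch (in TypeScript here, though the idea is language-agnostic; the helper names are hypothetical), each function below does exactly one thing, and callers compose them:

```typescript
// Each helper has a single responsibility.
function toUpper(s: string): string {
  return s.toUpperCase();
}

function trimEnds(s: string): string {
  return s.trim();
}

// Callers compose the single-purpose functions as needed,
// instead of one function that both trims and upper-cases.
const shout = (s: string) => toUpper(trimEnds(s));
```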

O - Open/Closed Principle

Do not alter the foundational code (such as base classes and common functions) but rather compose or extend them. Modules, classes and functions should be open for extension but closed for modification. The initial functionality should not be changed but may be extended via composition or inheritance.
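A small sketch of the idea (TypeScript, with a hypothetical discount example): the base class stays untouched, and new behavior arrives in a subclass.

```typescript
// The base calculator is closed for modification...
class Discount {
  apply(price: number): number {
    return price; // no discount by default
  }
}

// ...but open for extension: new behavior is added in a subclass,
// not by editing Discount itself.
class HalfOffDiscount extends Discount {
  apply(price: number): number {
    return price * 0.5;
  }
}
```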

L - Liskov Substitution Principle

Program to interfaces. Your business logic should be programmed against class interfaces (not concrete implementations of classes). A subclass should have an "is a" relationship with its base class. Wherever you can use a base class, you should be able to use a derived class.
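In other words, code written against the base type must keep working when handed a subtype. A sketch (TypeScript, hypothetical shapes):

```typescript
class Shape {
  area(): number {
    return 0; // a degenerate shape has no area
  }
}

class Square extends Shape {
  constructor(private side: number) {
    super();
  }
  area(): number {
    return this.side * this.side;
  }
}

// Written against the base class; any well-behaved subclass
// can be substituted without breaking it.
function totalArea(shapes: Shape[]): number {
  return shapes.reduce((sum, s) => sum + s.area(), 0);
}
```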

I - Interface Segregation Principle

Interfaces should be minimal and tailored to the function they need to perform. Minimize the number of methods in a given interface. Instead, expose multiple smaller interfaces. Clients can choose to implement just the interfaces they require.
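A sketch of splitting one fat interface into small ones (TypeScript, hypothetical printer/scanner example):

```typescript
// Two small, focused interfaces instead of one fat "Machine" interface.
interface Printable {
  print(): string;
}

interface Scannable {
  scan(): string;
}

// A simple printer implements only what it needs...
class SimplePrinter implements Printable {
  print(): string {
    return "printed";
  }
}

// ...while a multifunction device opts into both.
class MultiFunctionDevice implements Printable, Scannable {
  print(): string {
    return "printed";
  }
  scan(): string {
    return "scanned";
  }
}
```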

D - Dependency Inversion Principle

Do not use concrete implementations directly. High level modules should not depend on lower level modules. Constructors for higher level modules should accept abstractions (interfaces) of lower level modules. At runtime, a dependency injection container injects a concrete instance of the abstraction into the constructor.
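Constructor injection can be sketched like this (TypeScript, hypothetical names; a real application would let a DI container supply the concrete instance):

```typescript
// The high-level module depends only on this abstraction.
interface MessageSender {
  send(msg: string): string;
}

class Notifier {
  // The concrete sender is injected; Notifier never constructs one itself.
  constructor(private sender: MessageSender) {}

  notify(msg: string): string {
    return this.sender.send(msg);
  }
}

// A concrete implementation, supplied from the outside.
class EmailSender implements MessageSender {
  send(msg: string): string {
    return `email: ${msg}`;
  }
}
```

Swapping in a different `MessageSender` (say, a fake for unit tests) requires no change to `Notifier`.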


Separation of Concerns

Similar to the Single Responsibility Principle in SOLID but applied at a macro level (systems, modules, sub-systems). For example, a payment processing module should only perform payment processing and nothing else (i.e. it should not calculate invoices or bills).


DRY - Don't Repeat Yourself

Do not implement the same logic twice. Put it in a common module, library, class or function and use it everywhere.

Factor out common code into helper classes, methods and components. This makes the testing effort more concise and helps track down bugs to a single place in code instead of several repeating code sections.
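For instance (a TypeScript sketch with hypothetical names), a formatting rule used by invoices and receipts alike lives in one helper instead of being duplicated at each call site:

```typescript
// One place to fix if the currency format ever changes.
function formatCurrency(amount: number): string {
  return `$${amount.toFixed(2)}`;
}

const invoiceLine = (total: number) => `Invoice total: ${formatCurrency(total)}`;
const receiptLine = (total: number) => `Receipt total: ${formatCurrency(total)}`;
```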


Minimize Code

Minimize the amount of code you have to write to solve a problem. The less code there is, the smaller the chance of buggy code. Code minimization goes beyond DRY in that you may opt to look at alternate design approaches that minimize the amount of code you have to write to begin with. Remember, code is a liability! The less code you need to solve the problem, the better. Examples:

  • Use a well-established library or software package to solve the problem instead of writing code to do the same.
  • Use declarative code instead of imperative code. For example, in .NET, use LINQ instead of for loops.
  • Instead of using switch/case statements to select a value or action, use a table-driven approach to look up a value or execute an action (a database table or an in-memory data structure should do the trick).
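The table-driven point can be sketched as follows (TypeScript, with hypothetical shipping methods and rates); the lookup table replaces a switch/case and can later be moved to a database table without changing callers:

```typescript
// The table replaces a switch/case over shipping methods.
const shippingRates: Record<string, number> = {
  standard: 5,
  express: 15,
  overnight: 30,
};

function shippingCost(method: string): number {
  const rate = shippingRates[method];
  if (rate === undefined) {
    throw new Error(`Unknown shipping method: ${method}`);
  }
  return rate;
}
```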


YAGNI - You Aren't Gonna Need It

Do not implement code that is not needed now, even if you think you might need it in the future (you probably won't, or if you do, you will need something entirely different).

Don't anticipate features and try to adapt your design or code to features that are not necessary to complete the stories at hand. The chances are that either the requirements will change over time or the features you anticipate will never really be needed. Do the simplest thing that will work.


When writing code, check whether there is a well-known design pattern that can be used to make the implementation more robust and maintainable.


Use polymorphism (abstract class + derived classes) to implement specific behaviors instead of conditionals sprinkled all over the code to handle differing behaviors. Use a single switch to instantiate the polymorphic derived class with the desired behavior, and have the controlling logic invoke the behavior in a general fashion that is applicable to the entire family of derived polymorphic classes.
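A sketch of the single-switch pattern (TypeScript, hypothetical export formats):

```typescript
abstract class Exporter {
  abstract export(data: string[]): string;
}

class CsvExporter extends Exporter {
  export(data: string[]): string {
    return data.join(",");
  }
}

class JsonExporter extends Exporter {
  export(data: string[]): string {
    return JSON.stringify(data);
  }
}

// The one and only switch: it picks the derived class...
function createExporter(format: "csv" | "json"): Exporter {
  switch (format) {
    case "csv":
      return new CsvExporter();
    case "json":
      return new JsonExporter();
  }
}

// ...and the controlling logic invokes the behavior generically,
// with no format conditionals of its own.
const runExport = (format: "csv" | "json", data: string[]) =>
  createExporter(format).export(data);
```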

Keep methods (functions) short and single-purposed. Complex methods should be no longer than 20 lines. Simpler methods should be no longer than 10 lines.

Do not pass too many parameters into a method. When several parameters must be passed into a method, create a request object that contains the parameters and pass the request object into the method.
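For example (a TypeScript sketch; the field names are hypothetical), related parameters travel together in one request object:

```typescript
// One request object instead of three positional parameters.
interface CreateUserRequest {
  name: string;
  email: string;
  isAdmin: boolean;
}

function createUser(request: CreateUserRequest): string {
  const role = request.isAdmin ? "admin" : "user";
  return `${request.name} <${request.email}> (${role})`;
}
```

A side benefit: adding a new field to the request object does not break every existing call site the way adding a positional parameter would.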

Name the methods purposefully. The name should clearly reflect what the method does. A comment for the method should ideally be unnecessary. Write code that reads like prose. Comments should not be necessary when reading the code. Refactor logic into smaller methods with descriptive names. When reading the main method, the reader should be able to deduce the purpose of the helper methods being called and understand what the main method is trying to accomplish without reading the implementation of the helper methods.

Do not enter who changed the code or what the ticket or issue number was in code comments. Such information belongs in the source control check-in comments.


Use strongly typed variables and constructs (generics, for example) to detect problems as early as possible (at compile time instead of at runtime). Use enumerations instead of string values. Use named constants instead of string literals. Pay attention to compiler warnings.
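A sketch of enumerations and named constants catching errors at compile time (TypeScript, hypothetical order statuses):

```typescript
// An enumeration instead of raw status strings.
enum OrderStatus {
  Pending = "PENDING",
  Shipped = "SHIPPED",
}

// A named constant instead of a bare literal scattered through the code.
const MAX_RETRIES = 3;

function describeStatus(status: OrderStatus): string {
  // A typo like OrderStatus.Shiped fails at compile time,
  // whereas the raw string "SHIPED" would only fail at runtime.
  return status === OrderStatus.Shipped ? "on its way" : "not yet shipped";
}
```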


Write testable code

Structure and implement your code so that it is easily testable. This mindset will naturally help you write code that is loosely coupled and minimizes inter-dependencies. Loose coupling is essential for testing a class or a component in isolation, as is necessary when writing unit tests.

Write automated tests

By all means write automated unit tests. Better yet, take the Test Driven Development (TDD) route; that is, devise and write your tests before you start developing. TDD will greatly help in the design of the system, in addition to the more obvious benefit of producing a more complete set of automated tests. Having good automated unit tests allows us to refactor our code confidently.

Integrate often - Fail fast

Don't sit on pending changes. Check them in right away to find any incompatible changes. It is better to fail fast and correct the problem early in the cycle than to sit on a change and later have to figure out why things don't work because of something you changed last week.


Refactor often

Regularly groom (clean up) code without changing the underlying functionality. Refactoring is essential for long term maintenance of the code and to facilitate future enhancements.

Over time, the code will naturally decay. You will see violations of DRY, Minimize Code, and even YAGNI principles. It is imperative to take time to periodically refactor the code to ensure its maintainability. Remove and discard dead code; such code only adds noise to your code base. The lower the noise-to-signal ratio, the better.

Tuesday, March 12, 2019

Async calls, why call ConfigureAwait(false)

Adding .ConfigureAwait(false) to your async call tells the awaiter that the continuation does not need to resume on the captured context, taking away the overhead of switching back to the original thread.

DoSomething();  // runs on thread 1

await DoMoreAsync().ConfigureAwait(false); // completes on thread 2

DoSomethingElse();  // continues the work on thread 2 (instead of having to switch back to thread 1)

The default for ConfigureAwait is true, but ConfigureAwait(false) is the right thing to do if the code following the async call does not rely on the context of thread 1 (the thread-local storage of thread 1, for example).

Async method signature example in C#


Task<GetDatabaseServersResponse> GetDatabaseServersAsync(GetDatabaseServersRequest request, CancellationToken cancellationToken = default(CancellationToken));


public async Task<GetDatabaseServersResponse> GetDatabaseServersAsync(GetDatabaseServersRequest request, CancellationToken cancellationToken = default(CancellationToken))
{
    var list = await _dbContext.DatabaseServer.ToListAsync(cancellationToken);
    return new GetDatabaseServersResponse(_mapper.Map<List<DatabaseServerRm>>(list), request.TraceId);
}

Executing an async method synchronously in C#

void NotAnAsyncMethod()
{
    try
    {
        var answer = SomeAsyncMethod().GetAwaiter().GetResult();
    }
    catch (Exception ex)
    {
        // ex is our application's exception instead of an AggregateException
        // (an AggregateException is thrown if .Wait() or .Result is used instead)
    }
}

Sunday, February 17, 2019

Creating distinct pairs of values from database table


-- Step 1: a cross join produces every combination of values,
-- including self-pairs such as (a, a).
SELECT t1.val, t2.val
FROM tbl1 t1, tbl1 t2;

-- Step 2: exclude self-pairs; however, (a, b) and (b, a) both still appear.
SELECT t1.val, t2.val
FROM tbl1 t1, tbl1 t2
WHERE t1.val <> t2.val;

-- Step 3: keep only one ordering of each pair. The < condition alone is
-- sufficient, since t1.val < t2.val already implies t1.val <> t2.val.
SELECT t1.val, t2.val
FROM tbl1 t1, tbl1 t2
WHERE t1.val < t2.val;


Saturday, February 2, 2019

Up and running with Angular 7 in 7 minutes

1. Install Node.js

Verify by checking versions as follows.

node -v
npm -v

2. Install the Angular CLI

npm install -g @angular/cli

Verify by checking version.

ng --version

3. Install Git

# Use without --global if setting these values on a project-by-project basis
git config --global user.email
git config --global user.name

If a valid email and name are not displayed, set them using:

git config --global user.email "you@example.com"
git config --global user.name "Your Name"

4. Verify Sass is installed

Verify by checking version.

sass --version

If not installed:

npm install -g sass

5. Create an Angular app

Change to directory where you would like to create the application.

ng new helloang

Routing: Yes
Choose Sass

6. Serve the application

cd helloang

ng serve --open

If you get an error like the following:

ERROR in multi ./src/styles.sass
Module not found: Error: Can't resolve 'C:\Temp\helloang\src\styles.sass' in 'C:\Temp\helloang'

This is because projects created by Angular CLI 7.3.0 reference the wrong extension for the styles file in angular.json.

Replace styles.sass with styles.scss in angular.json.

Friday, December 7, 2018

Takeaways and highlights from AWS re:Invent 2018

I had an opportunity to attend the seventh installment of AWS re:Invent. It was indeed a large gathering, with more than 50,000 in attendance. Despite the size of the crowds, the conference was very well run. I was not able to reserve a seat in advance for all the workshops and sessions I was interested in, but I was able to attend most of them by queuing up in the walk-up line. Here are some of the takeaways and highlights.

Machine Learning was front and center

AWS provides ML capabilities at three levels of abstraction.
  1. Fully managed services such as AWS Rekognition, Polly, Amazon Comprehend (NLP), Alexa, etc. AWS introduced Textract, a smart OCR service.
  2. Managed execution of pre-built or custom ML models. SageMaker fills this role. 150+ machine learning algorithms are being made available in the AWS Marketplace.
  3. Infrastructure for running ML tools such as MXNet, PyTorch, TensorFlow, etc.
  • Several new sub-services for SageMaker were announced. For models that require manual effort to train, AWS introduced SageMaker Ground Truth for classification of data via Mechanical Turk or other sources. AWS also introduced SageMaker RL, which performs training of models through rewards over time. To promote this service, AWS introduced DeepRacer, a fully autonomous 1/18th-scale race car driven by reinforcement learning, to help developers gain hands-on working knowledge of SageMaker RL. Amazon SageMaker Neo enables machine learning models to be trained once and run anywhere, in the cloud and at the edge, with optimal performance.
  • AWS also announced AWS Inferentia, a new inference chip (yes, a custom-built computer chip) that promises to significantly reduce the time it takes to draw inference from an ML model.
  • ML Insights works with Amazon QuickSight, a BI Service for interactive dashboards. ML Insights adds ML-powered anomaly detection, ML-powered forecasting, and Auto-narratives (add text descriptions automatically) to QuickSight dashboards.
  • AWS Personalize is a managed service for building and consuming recommendation models. It does the heavy lifting needed to design, train, and deploy a machine learning model under the covers.
  • AWS RoboMaker provides a robotics development environment for application development (using the open-source robotics software framework, Robot Operating System (ROS)), a robotics simulation service to accelerate application testing, and a robotics fleet management service for remote application deployment, update, and management.

AWS IoT services matured and can now connect to more things

  • AWS IoT Core can now ingest data directly, bypassing the MQTT broker, by having the thing publish data to the $aws/rules/ruleName topic. This eliminates the additional time and cost of publishing data to an IoT topic before it reaches the rules engine for the desired processing.
  • AWS IoT SiteWise opens up AWS IoT to data from industrial devices. It runs on a gateway that resides in the customer's facilities and automates the process of collecting and organizing industrial equipment data.
  • AWS IoT Events - A managed service to analyze patterns in IoT data and respond accordingly.
  • AWS IoT Things Graph can be used to connect devices and web services to build IoT applications. With this service, one can define interactions between them to build multi-step automation applications.

Serverless computing saw some important improvements

  • AWS Lambda now natively supports Ruby.
  • Lambda now supports custom runtimes (any runtime that can run on Linux) via the Lambda Runtime API. C++ and Rust are now supported on Lambda using this new feature. Some other languages that third parties have enabled on AWS Lambda are Erlang, Elixir, COBOL, and PHP. This feature will certainly encourage migration of legacy code to Lambda.
  • A new feature of Lambda called Layers allows Lambda functions to share code and data. For example, if several Lambda functions use a common library, that library does not need to be deployed (duplicated) for each of these Lambda functions. Instead, the library can be pulled in from a shared layer.
  • Step Functions (orchestration of Lambda functions) can now invoke many AWS managed services (such as DynamoDB, AWS Batch, Amazon SQS, and Amazon SageMaker) directly in a defined flow.
  • A new service called AWS Serverless Application Repository allows cataloging/discovery/assembly of a serverless application from existing Lambda functions.
  • Lambda functions can now be placed behind an Application Load Balancer. This allows Lambda functions to be invoked directly via HTTP/HTTPS without having to use the API Gateway.
  • Firecracker is a lightweight virtualization technology based on KVM. Amazon uses this technology internally for its AWS Lambda offering as well. According to AWS, it allows "launching of lightweight micro-virtual machines (microVMs) in non-virtualized environments in a fraction of a second, taking advantage of the security and workload isolation provided by traditional VMs and the resource efficiency that comes along with containers".
  • AWS App Mesh - For monitoring and controlling communication across microservices on AWS such as ECS, EKS and Kubernetes running on EC2.
  • API Gateway now supports Web Sockets. For Single Page Apps, live updates from the Server are usually sent over Web Sockets. This makes API Gateway more desirable as a backend for interactive SPAs.
  • SNS now supports filtering of messages that are published to a given SNS topic. This can help discard undesirable messages at the SNS service level thus reducing traffic to a configured SNS recipient such as AWS Lambda or a microservice.

Databases and Storage

Amazon Aurora, a MySQL-based managed database service, was featured prominently in Werner's keynote. Amazon has famously vowed to get rid of all its Oracle databases. I imagine Aurora will replace a good number of these databases.
  • Amazon Aurora added a Global database feature that is designed for applications with a global footprint. It allows a single Aurora database to span multiple AWS regions, with fast replication to enable low-latency global reads and disaster recovery from region-wide outages. I imagine one of the main motivations for adding this feature was to match Microsoft Azure Cosmos Database's globally distributed storage model.
  • Amazon DynamoDB added ACID-compliant transactions across multiple tables in a given AWS region. This is important for applications that need to store data reliably across multiple tables in a single transaction. DynamoDB also added an On-demand pricing model where the application does not need upfront capacity planning (read/write capacity units).
  • Amazon Timestream is a new database offering optimized for storing time-series data and is more cost effective than other storage options such as RDS. This is an attractive option for storing large amounts of streaming data, such as telemetry data from IoT devices.
  • Amazon had previously introduced AWS Glue to discover and catalog structured and unstructured data to aid in building a Data Lake. Amazon has now introduced AWS Lake Formation, which sits on top of AWS Glue and makes the job of configuring data sources and governing the source data much simpler.
  • S3 added intelligent tiering which automatically moves data to different pricing/availability tiers of S3 based on S3 object access patterns.
  • AWS Transfer for SFTP is a new fully managed SFTP service for S3. It allows access to data stored in S3 buckets through the SFTP protocol.
  • Amazon introduced Amazon FSx for Lustre, a fully managed file system built on Lustre, a file system designed for large-scale cluster computing. Similarly, Amazon FSx for Windows File Server delivers a managed Windows file system (supporting SMB, NTFS, and AD) for use with workloads on Windows Server.

Amazon finally gets into the Blockchain game

Two Blockchain related services were announced.
  1. Amazon Managed Blockchain is a fully managed service that makes it easy to create and manage scalable Blockchain networks using the popular open source frameworks Hyperledger Fabric and Ethereum.
  2. Amazon Quantum Ledger Database (QLDB) is a purpose-built ledger database that provides a complete and verifiable history of application data changes. The database is append-only/immutable (it can't be edited) and cryptographically verified (to ensure its contents have not been tampered with).

Finally, there were new offerings in the area of DevOps and Security

  • AWS CodeDeploy now supports Blue/Green deployments for AWS Fargate and Amazon ECS.
  • AWS Security Hub enables AWS customers to centrally view and manage security alerts and automate compliance checks within and across AWS accounts.
  • AWS Control Tower helps create and maintain secure, well-architected multi-account AWS environments with respect to configuration of organizations, federated access, centralized logging, IAM auditing, and workflows for provisioning new accounts.
  • AWS Well-Architected Tool can review the state of workloads and compare them to the latest AWS architectural best practices.
  • AWS Outposts (later in 2019) - An on-premises hardware offering developed jointly by Amazon and VMware. It is fully managed, maintained, and supported by AWS to deliver access to the latest AWS services on the customer's site. It brings native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility.

Well, that's all for this year. I believe we are nowhere near utilizing the full potential of AI, Machine Learning, and IoT data. I have no doubt we will see many more outstanding innovations in these areas in the near future. We are now well beyond dynamic websites (Web 1.0) and mobile computing (Web 2.0). Warp speed to AI, ML, and IoT (Web 3.0).