Monday, January 27, 2020

ATF Parameterized Testing

Parameterized testing within the Automated Test Framework (ATF) is a feature that allows you to run the same test with different inputs, eliminating the need to copy tests.

For example, this feature comes in handy if you want to automate the testing of a particular form. The high-level steps would be:

  • Open form
  • Fill fields
  • Submit the form.
Once you have the above steps configured, you can define different parameter sets, and the steps will run multiple times, once for each set of inputs into the fields.

As of the New York release, there is a limitation: these parameters can't be accessed from the Run Server Side Script step.

In this article, I will describe how to define a custom step configuration to take advantage of parameterized testing on the server side.


New Step Config:


  • Navigate to Step Configurations from the left navigation menu and create a new Test Step Config.





  • Define input variables on this step to pass parameters from the step to the server script. For this article, I have created two input variables.




  • Access the newly defined input parameters in the step configuration's script using the inputs object, and write whatever script logic you need to process them (see the sketch after this list).






  • Let's test the above step configuration. Create a new test; I have also defined Exclusive Parameters on it and enabled Parameterized Testing.




  • I have created a new test step from the above-defined step configuration and mapped the parameters to the input variables.
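
For reference, here is a minimal sketch of what the step configuration's script could look like, assuming the standard ATF step execution script signature and two hypothetical input variables named u_table and u_short_description (the variable names in your own step config will differ):

(function executeStep(inputs, outputs, stepResult, timeout) {
    // Read the values passed in from the parameterized test step
    var tableName = inputs.u_table + '';
    var shortDescription = inputs.u_short_description + '';

    // Example processing: insert a record using the parameter values
    var gr = new GlideRecord(tableName);
    gr.initialize();
    gr.setValue('short_description', shortDescription);
    var sysId = gr.insert();

    if (sysId) {
        stepResult.setOutputMessage('Created ' + tableName + ' record ' + sysId);
        stepResult.setSuccess();
    } else {
        stepResult.setOutputMessage('Failed to create a record on ' + tableName);
        stepResult.setFailed();
    }
})(inputs, outputs, stepResult, timeout);

The parameters tagged on the test step flow into the inputs object, so each parameter set drives one execution of this script.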
 



Now the setup is complete, and it's time to test it with test data sets. I have defined a few, and the run results show that the above setup worked like a charm.










It's not ideal to define a new step configuration every time you need parameters on the server side. Instead, you can define one generic step configuration that clubs a few use cases into one: for example, define one input parameter as a generic JSON object and another input parameter to pass the script to be executed by the step config, as sketched below. You get the idea.
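
As a rough sketch of that idea (not a definitive implementation), the generic step's execution script could take a single JSON input, a hypothetical variable named u_params here, and branch on a use-case flag inside it, assuming the same ATF step execution script signature as above:

(function executeStep(inputs, outputs, stepResult, timeout) {
    // Single generic input variable holding all parameters as JSON,
    // e.g. {"use_case": "create_incident", "short_description": "test"}
    var params = {};
    try {
        params = JSON.parse(inputs.u_params + '');
    } catch (e) {
        stepResult.setOutputMessage('u_params is not valid JSON: ' + e);
        stepResult.setFailed();
        return;
    }

    // Branch on a use-case flag so one step config can serve several scenarios
    if (params.use_case == 'create_incident') {
        var inc = new GlideRecord('incident');
        inc.initialize();
        inc.setValue('short_description', params.short_description);
        stepResult.setOutputMessage('Created incident ' + inc.insert());
        stepResult.setSuccess();
    } else {
        stepResult.setOutputMessage('Unknown use case: ' + params.use_case);
        stepResult.setFailed();
    }
})(inputs, outputs, stepResult, timeout);

In the test step, each parameter set would then simply supply a different JSON string for u_params.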







ServiceNow Major upgrade

ServiceNow releases two major versions every year, and it is important to upgrade your implementation so that it remains supported and your organisation can benefit from the latest features included in the newer releases.


Depending on the size of the instance and the amount of customization, a major upgrade cycle will generally take 4 to 8 weeks.


Review the release notes, and if you have a sandbox instance available, use it for a first look: analyse the upgrade's impact on existing functionality and size the effort required to action the skip logs.

The ServiceNow docs site has a 7-phase upgrade planning checklist, which is a good starting point for planning the upgrade cycle.


The key points from the above checklist include:


  1. Prepare project plan
  2. Prepare test plan
  3. Identify key stakeholders, power users, and testers
  4. Identify interfacing teams and environments
  5. Identify the instances to upgrade
  6. Agree on issue tracking mechanism

Dev/Test instance upgrade, shakeout, and skip log reviews:

  • Clone prod over to DEV
  • Upgrade DEV to the target version
  • Review and action skip logs
  • Clone prod over to TEST
  • Upgrade TEST to the target version
  • Apply the skip log decision update sets
  • Perform manual testing in the TEST instance for functionality where ATF does not have coverage
  • Run the Automated Test Framework (ATF) suites
  • Track issues identified via ATF and manual testing
  • Fix the issues

Tips:

  •  Include audit data and attachments in the clone

  • Use the Check Now button on the Upgrade Monitor screen if the upgrade does not kick in at the scheduled time.

Skip log reviews

During the upgrade process, if the system identifies a conflict, i.e. the upgrade contains an update to a file that has been modified by the customer, it skips that particular file update and records it in the skip log. Customers are responsible for reviewing the skip log entries and taking appropriate action. Each entry gets one of the following resolution statuses:


  1. Reviewed and Retained
  2. Reviewed and Reverted
  3. Reviewed and Merged
  4. Reviewed
  5. Not Reviewed

"Reviewed and Reverted" and "Reviewed and Merged" will modify the underlying application file, so capture these changes to the application file in an update set for promotion to the TEST instance.


Tips:

  • Group the skip log entries by priority and action the highest-priority items first.
  • Make sure the update set in the right scope is selected while applying "Reviewed and Reverted" and "Reviewed and Merged" decisions.
  • Use the comments field to capture rationale behind the decision.



















Sunday, January 19, 2020

Bulk download attachments ServiceNow


The script below can be used to bulk download attachments from ServiceNow.

import os
import concurrent.futures

import requests

uri = "https://instance.service-now.com"
api = "/api/now/table/"

tasks = []


def make_folder(folder):
    # Create a per-record folder under the local "data" directory
    folder = f"data/{folder}"
    if not os.path.exists(folder):
        os.makedirs(folder)


def write_to_file(file_name, r):
    # Stream the binary response to disk in 1 KB chunks
    with open(file_name, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive chunks
                f.write(chunk)
        f.flush()
        os.fsync(f.fileno())


def process_attachment(obj):
    # Download a single attachment via the Attachment API and save it
    attachment = obj['attachment']
    attachment_sys_id = attachment['sys_id']
    attachment_file_api = f"{uri}/api/now/attachment/{attachment_sys_id}/file"
    binary_response = requests.get(attachment_file_api, **obj['options'])
    file_name = f"data/{obj['number']}/{attachment['file_name']}"
    write_to_file(file_name, binary_response)
    return file_name


def process_tasks(tasks):
    # Download attachments in parallel using a thread pool
    jobs = []
    with concurrent.futures.ThreadPoolExecutor(max_workers=40) as executor:
        for task in tasks:
            jobs.append(executor.submit(process_attachment, task))

        for job in concurrent.futures.as_completed(jobs):
            print(job.result())


def prepare_options():
    # Common request options: JSON header, basic auth and a timeout
    user_name = "username"
    password = "pwd"
    options = {
        'headers': {'Content-Type': 'application/json'},
        'auth': (user_name, password),
        'timeout': 50
    }
    return options


tables = ['incident']
options = prepare_options()
for table in tables:
    # Fetch the records of the table, then look up their attachment metadata
    url = f"{uri}{api}{table}"
    response = requests.get(url, **options)
    records = response.json()['result']
    for record in records:
        sys_id = record['sys_id']
        number = record['number']
        attachment_meta_api = (f"{uri}/api/now/attachment?"
                               f"sysparm_query=table_name={table}^table_sys_id={sys_id}")
        a_response = requests.get(attachment_meta_api, **options)
        attachments = a_response.json()['result']
        if len(attachments):
            make_folder(number)

        for attachment in attachments:
            tasks.append({
                'attachment': attachment,
                'number': number,
                'options': options
            })

process_tasks(tasks)






The above Python 3 script downloads attachments to the local machine and stores them in the data folder, one sub-folder per record number.

Inputs:


  1. tables = list of task-related tables that you want to download attachments from.
  2. uri = instance URI.
  3. user_name = user name to access the instance.
  4. password = password to access the instance.
Ensure the provided user has read access to the task table.







GRC Scripted Control Indicators

Control indicators provide a way to monitor a control objective / risk statement automatically and collect the relevant data for auditing purposes.


Control indicators can be Manual, Basic, or Scripted.

Manual and Basic indicators are fairly straightforward; scripted indicators are the ones that I am going to go through in this article.


Why and when are scripted indicators required?


Scripted indicators provide a way to read data from any part of the platform (or from outside the platform through integrations) and interpret that data to conclude whether an item is still effective.


Objects available for scripting:


result: after collecting and interpreting the data, once we determine the outcome and the supporting data for it, we have to set those values on the result variable.

For example:
result.passed = true;
result.value = 500;     
result.supportingDataIds = [id1, id2, ...]     

Notes:

  1. result.passed expects either pass (true) or fail (false) at the end; you must set this value.
  2. result.value is for auditing purposes, a value that helps you understand the end result.
  3. result.supportingDataIds is an array of record sys_ids from the table selected as the Supporting Data table.

current: the item being monitored (control objective / risk statement) from the control indicator definition is available for access as the current object.








Example:

The example indicator below is defined to run weekly and verify that incidents created in the last week have a meaningful description for a particular business service.

The current object is used to fetch the profile/entity the control objective is associated with.

Once we have the entity from current, we navigate from there to the business service that the profile is defined on.

Once we have the business service, we apply it in an encoded query to fetch incidents related to that business service created within the last 7 days, and verify whether all those incidents have a meaningful description.



var count = 0;
var supportingDataIds = [];

// The entity (profile) the control objective applies to
var profile = current.profile || {};
var applies_to = profile.applies_to;

// Incidents for this business service created within the last 7 days
var query = 'sys_created_onRELATIVEGE@dayofweek@ago@7';
query = query + '^business_service=' + applies_to;

var table = 'incident';
var inc = new GlideRecord(table);
inc.addEncodedQuery(query);
inc.query();

while (inc.next()) {
    // Count incidents whose descriptions are too short to be meaningful
    if (((inc.short_description + '').toString().length < 10) ||
        ((inc.description + '').toString().length < 20)) {
        count++;
        supportingDataIds.push(inc.sys_id + '');
    }
}

if (count > 2) { // mark as failed if more than 2 such incidents are found
    result.passed = false;
    result.value = count;
    result.supportingDataIds = supportingDataIds;
} else {
    result.passed = true;
    result.value = count;
}

The count variable is used to track the number of incidents that did not meet the criteria, and at the end it is used to set the result values.
