Monitoring and Evaluation Skills Training & Consultancy

Dolphins Group

DEFINITIONS

1.1 Monitoring

Monitoring is an integral part of day-to-day operational management to assess progress against objectives.

 

  • It involves the tracking of inputs, processes, activities, outputs and outcomes against indicators, and the modification of these processes and activities as and when necessary.

 

  • The aim of monitoring should be to support effective management through reports on actual performance against what was planned or expected.

 

  • Monitoring tools are essentially used for the early identification of problems and the solving of these problems as and when they occur.

 

  • Monitoring is based on information collected before and during the operations.

 

  • Information required for monitoring may be entered into and analyzed from a project management system (PMS) or a management information system (MIS) or any other similar tool.

 

  • The accuracy of the information collected for monitoring purposes, and ways to assess that accuracy, are important aspects of monitoring. Monitoring usually precedes, leads up to and forms the basis for evaluation: findings from monitoring may be used as part of evaluation, and evaluation tools may also be used for monitoring.

 

Steps in a Monitoring Process

For a monitoring system to work effectively, it requires the development of a management information system (MIS) for data capture, storage, retrieval and analysis.

This could be based on manual and/or electronic templates. It may be advisable to develop electronic templates for more complex monitoring requirements.

For instance, to date, there are over 150 social interventions across Trinidad and Tobago aimed at improving the lives of the most marginalized segments of society.

These interventions are carried out through several Government Agencies and reach thousands of beneficiaries. The data for such interventions should be available in electronic formats and stakeholders should be able to access the data at any point in time.

 

Role of Performance Indicators in Monitoring

Monitoring is relatively straightforward if, right from the outset of an intervention, thought is given to developing indicators for the defined objectives.

The data collected should be based on the agreed indicators. The information derived can then be used to improve the activities.

In conclusion, it should be underlined that routine collection of intervention data is necessary. It helps to improve programme management and performance and it enables us to know how well a programme is doing. If lapses are detected midstream, measures can be taken to correct them.

Moreover, it facilitates accountability in terms of determining if established policies and procedures are being adhered to.

 

PROGRAMME PLANNING AND DESIGN

This session focuses on programme initiation, planning and design. The underlying assumption is that evaluation should be a key consideration when programmes are being designed.

It should form an integral part of the programme to facilitate the collection of appropriate data to inform decision-making. There will be emphasis on conceptual models, logical framework, writing of goals and objectives, identifying indicators, and defining activities.

Also, participants will be introduced to programme theory, in particular how to identify programme assumptions and logic, including establishing causal relationships. This should help participants understand how a programme works, since the underlying assumptions of a programme may or may not be valid.

There is a need to review a programme theory in order to establish plausible explanations for what has worked and what has not worked. Considerable thought must go into conceptualizing and designing a programme. Programme theory or assumptions underlying a programme/project should be carefully considered.

This module is predicated on the assumption that what goes into the initiation and preparation of a programme goes a long way in determining its ultimate success in meeting the goals and objectives for which it was designed. In particular, if the programme is well designed, it becomes relatively straightforward to initiate formative and/or summative evaluations. The purpose of this module is to review the various pre-implementation steps programme managers should follow before designing a programme.

 

Evaluation

Evaluation is a decision-making tool to be incorporated into the planning cycle and the performance management of government.

  • Evaluation is a systematic assessment of the strengths and weaknesses of the design, implementation and the results of completed or ongoing interventions.

Its aim is to help to improve these interventions.

  • The main objective of evaluation is to supply information on lessons learnt from work already done to influence future planning.
  • Evaluation is a systematic process with key indicators or criteria against which to evaluate the work done.
  • Inputs, activities, outputs, outcomes and impacts are components of the evaluation process, and ways to evaluate each of them are essential components of M&E. Various phases of an intervention may need to be evaluated: for example, a project may be evaluated at a particular milestone, at the end of a financial year, or at the end of the entire project. Impact evaluation may need to take place at a specified period after a project has ended.

 

Monitoring vs. Evaluation

  • Timing: Monitoring is ongoing; evaluation is periodic.
  • Focus: Monitoring tracks performance; evaluation is about judgment, learning and merit.
  • Who conducts it: Monitoring is conducted internally; evaluation is conducted externally or internally, often by another unit within the organisation.
  • Question answered: Monitoring asks "What is going on?"; evaluation asks "Why do we have the results indicated by the monitoring data?"

 

WHY IS MONITORING AND EVALUATION IMPORTANT?

Monitoring and Evaluation (M&E) processes can assist the public sector in evaluating its performance and identifying the factors which contribute to its outcomes.

M&E helps to provide an evidence base for public resource allocation decisions and helps identify how challenges should be addressed and successes replicated.

 

2.1 Four key uses of M&E information

Listed below are four broad categories of applying M&E within the public sector.

  1. M&E can support budgeting and planning processes, where there are often many competing demands on limited resources; in this way M&E can assist in setting priorities. Terms that describe the use of M&E information in this manner include evidence-based policymaking, results-based budgeting, and performance-informed budgeting.
  2. M&E can help government Departments in their policy development and policy analysis work and in programme development.
  3. M&E can aid government departments to manage activities better. This includes government service delivery as well as the management of staff.
  4. M&E enhances transparency and supports accountability by revealing the extent to which government has attained its desired objectives.

 

2.2 Practices that promote useful M&E systems: Some thoughts for discussion

The M&E system:

  • Generates information that is shared within the organisation. One way of doing so is the use of M&E Forums, which are being successfully used in some provinces, although other mechanisms are available, such as learning circles.
  • Is integrated with existing management and decision-making systems.
  • Includes an inventory of the institution's current M&E systems, describing their current status and how they are to be improved, as well as mentioning any plans for new M&E systems.
  • Encompasses the organisation's approach to implementing the Programme Performance Information Framework in preparation for audits of non-financial information.
  • Fits with the organisational structure. The optimal organisational structure for M&E will differ from organisation to organisation. Some organisations may prefer a centralised, specialised M&E unit; others may opt to decentralise M&E functions to components within the organisation.
  • Has sufficient prominence within the organisation. Giving sufficient authority to officials with M&E responsibilities can ensure that M&E findings inform policy and programmatic decision-making and resource allocation.
  • Is built on good planning and budgeting systems and provides valuable feedback to those systems. How M&E processes relate to planning, budgeting, programme implementation, project management, financial management and reporting processes should be clearly defined.

 

OVERVIEW: Six Steps to Developing an M&E System

 

STEP ONE Specify the intervention: Agree on what you're aiming to achieve and specify the inputs, processes, activities, outputs, outcomes and impacts. This is called the programme logic of the intervention.

 

STEP TWO Develop the most appropriate indicators. These should be measurable.

 

STEP THREE Develop a data collection strategy: use existing information sources or develop new tools.

 

STEP FOUR Collect baseline data and set realistic performance targets.

 

STEP FIVE Monitor the implementation of your intervention by collecting data.

 

STEP SIX Use the monitoring data for evaluation, planning and management, and reporting.

 

STEP ONE: Specify the Intervention

 

Specifying the intervention essentially means clarifying what you intend to do, how you are going to do that, and what you expect to see as a result of your activities. This requires working through a number of steps, namely:

 

  • Understanding the problem that you are attempting to address and the context in which you are working:

 

  1. Identify the priority problem or issue that the intervention is trying to address.

 

  2. What are the possible causes of the main problem? Consider levels of causes by repeatedly asking the question "Why?". For example, if your priority problem is a lack of skilled professionals, start by asking:

     • Why? Because few learners participate in FET.
     • Why? Because there is inadequate or no career guidance and subject advice offered.
     • Why? Because there is insufficient training of teachers.

 

Understanding the Context: Conducting a Situational Analysis

Conducting a situational analysis is a way of systematically establishing the core problem that has been identified. A situational analysis seeks to identify:

  • Gaps in service delivery where there is a lack of services or services are not being delivered in the manner planned;
  • The extent of the problem and the needs of the target audience;
  • The most effective strategy for implementation.

 

  • Developing goals and objectives for the intervention

 

The Goals and Objectives

Various organizations structure their plans in different ways. However, all structures follow a hierarchical design of goals, objectives and activities: the broad goal is further defined and broken down into more specific objectives, which are in turn broken down into detailed and focused activities. This shows the logical links between your activities, objectives and goals; i.e. if you conduct the activities you plan, you assume that you will achieve a specific objective, and if you achieve all the objectives you have set, you will attain your goal, as the sketch below illustrates.
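As a minimal sketch of this hierarchy (the goal, objective and activity names are illustrative, borrowed from the FET example used later in this module), the plan can be pictured as a nested structure:

    # A plan hierarchy: one goal, broken into objectives, each broken into
    # activities. All names are illustrative.
    plan = {
        "goal": "Increase the supply of skilled professionals",
        "objectives": [
            {
                "objective": "Ensure that 80% of students in financial need "
                             "are supported to attend an FET institution",
                "activities": [
                    "Identify students in financial need",
                    "Award and administer bursaries",
                ],
            },
        ],
    }

    # The logic reads bottom-up: completing each activity should achieve its
    # objective, and achieving every objective should attain the goal.
    for obj in plan["objectives"]:
        for act in obj["activities"]:
            print(f"{act}  ->  {obj['objective']}  ->  {plan['goal']}")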

 

Well-defined strategic goals and strategic objectives provide a basis from which to develop suitable programmes and projects, as well as appropriate indicators. A strategic goal is a general summary of the desired state that an intervention is working to achieve. Strategic goals should meet the following criteria:

 

  • Forward looking: Outlining the desired state toward which the project is working.

 

  • Relatively General: Broadly defined to encompass all project activities

 

  • Brief: Simple and succinct so that all project participants can remember it

 

 

Measurable Objectives: What do we wish to achieve?

 

Measurable objectives are specific statements detailing the desired outcomes of an intervention. If the project is well conceptualized and designed, realization of a programme's or project's objectives should lead to the fulfillment of the strategic objective and ultimately the strategic goal. A good measurable objective meets the following SMART criteria:

  • Specific: The nature and the required level of performance can be clearly identified.

 

  • Measurable: The required performance can be measured.

 

  • Achievable: The target is realistic given existing capacity.

 

  • Relevant: The required performance is linked to the achievement of a goal.

 

  • Time-bound: Achievable within a specific period of time.

In addition to the SMART criteria, it is very helpful to state your objective in terms of the change that you would like to see instead of just specifying an activity. For example, referring back to the FET example, it is better to write the objective as "To ensure that 80% of students in financial need are supported to attend an FET institution" rather than "To provide 80% of students in financial need with bursaries". The second objective looks very much like an activity and does not specify the change that is intended.

 

  • Plan Activities:

 

What we do: Activities are specific actions or tasks undertaken by staff, designed to reach each of the intervention's objectives. A good activity meets the following criteria:

 

  • Linked: Directly related to achieving a specific objective

 

  • Focused: Outlines specific tasks that need to be carried out

 

  • Feasible: Accomplishable in light of the project's resources and constraints

 

  • Appropriate: Acceptable to and fitting within site-specific cultural, social, and biological norms

 

Establish the Inputs: What we use to do the work

Inputs are all the resources that contribute to the production and delivery of outputs; they are "what we use to do the work". They include finances, personnel, equipment and buildings. In managing for results, budgets are developed and managed in line with achieving the results, particularly the output-level results.

 

Specifying appropriate outputs often involves extensive policy debates and careful analysis. The process of defining appropriate outputs needs to take into consideration what is practical and the relative costs of different courses of action. It is also important to assess the effectiveness of the chosen intervention.

 

 

  • Conceptualizing the expected results

 

Monitoring and evaluation terminology can be confusing, particularly the meanings of the words input, output, outcome and impact.

 

This is partly due to the fact that there is not always consistency in how various donors and government bodies use and define the words.

 

What is important to remember is the logic of the process: in other words, does the sequence of results that you have developed logically flow one from the other?

 

 

Identifying Indicators

 

An indicator is a measure of a concept or behavior. An indicator is used as a road map to assess the extent to which specific project objectives have or have not been attained.

There are two types of indicators, namely process and results indicators.

 

  • Process Indicators

Process indicators provide information on the activities that are being implemented: the types of activities, their number, who the activities are directed at, and so on.

These indicators provide information that would enable us to determine if an intervention is moving in the right direction in order to achieve the stated objectives.

This type of information is collected throughout the life of the intervention. Process indicators are useful for monitoring. Data collected using process indicators help in determining the reasons for the success or failure of an intervention.

 

  • Results Indicators

 

These types of indicators are closely linked to the stated objectives of an intervention. They are meant to provide a framework for assessing whether or not as a result of the intervention there has been a visible change in the circumstances of the beneficiary population.

The extent of change can be measured at the programme level or the population level. Results indicators are expressed as a percentage, ratio or proportion.

 

Results indicators provide a basis for assessing the degree of change in relation to the beneficiaries and/or their environment. Although results indicators are meant to measure change, they should not be anticipative. For example, instead of "reduction in the number of teenage pregnancies" it is more appropriate to write "percentage of adolescent girls aged 10-19 who have had babies in the last year".

 

  • Results Indicators and Objectives

 

Since results indicators provide an indication of whether or not an objective has been achieved, it is advisable to include at least one result indicator when designing the intervention.

 

  • Principles for Selecting Results Indicators

 

Indicators should be precise and clear. If indicators are written as percentages, both the numerator and the denominator should be specified, as in the sketch below.
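For instance, using the teenage-pregnancy indicator above (the figures here are invented purely for illustration), a percentage indicator is only well defined once its numerator and denominator are both named:

    # Indicator: "percentage of adolescent girls aged 10-19 who have had
    # babies in the last year". Figures are illustrative only.
    numerator = 120     # girls aged 10-19 who had a baby in the last year
    denominator = 4800  # all girls aged 10-19 in the target population
    value = 100.0 * numerator / denominator
    print(f"Indicator value: {value:.1f}%")  # -> 2.5%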

 

  • Criteria for Selecting Indicators

 

There should be emphasis on the selection of indicators that are clear and concise.

 

The following criteria must be considered when selecting indicators:

Relevance: There should be a clear relationship between the indicator and the objective being measured. Whatever information is collected must be useful to decision-making.

 

More information is not necessarily more useful.

 

Reliability: Relates to the stability of the measurement process. The same measurement process should produce the same findings even if the data analysis is repeated several times over.

 

Validity: The indicator must be consistent and should represent what is actually being measured. A host of factors can influence the validity of the data being measured, including poor design of the data collection instrument, poorly trained data collection staff, measurement errors, poor sampling and transcription errors.

 

Availability of information: There should be ready access to sources of data.

Ease in measuring: The indicator does not require sophisticated methods of measurement.

Easy to understand: The social planner/evaluator must clearly communicate what is being measured, and the user must understand what is being measured.

 

Cost effectiveness: The cost of data collection, in terms of both human and financial resources, should be considered when choosing an indicator. It should not be too expensive to collect the data. The basic rule of thumb is that costs associated with evaluation should range between three and ten percent of the total cost of the intervention, as the sketch below illustrates.
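As a quick worked example of this rule of thumb (the intervention budget is hypothetical):

    # Evaluation costs should fall between 3% and 10% of the total cost.
    total_cost = 2_000_000  # hypothetical intervention budget
    low, high = 0.03 * total_cost, 0.10 * total_cost
    print(f"Evaluation budget band: {low:,.0f} to {high:,.0f}")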

 

Robustness: The data that is generated must be reliable and replicable.

 

Timeliness in data collection: Data collection and analysis should take place within a well-defined timeframe, in terms of both the frequency of data collection and the currency of the data.

 

PLANNING A USEFUL EVALUATION

Social interventions are designed to achieve government-wide goals by addressing social conditions or problems. Evaluations are initiated to determine the effectiveness of such interventions in achieving the desired goals and objectives.

Participants will be introduced to simple techniques for planning useful evaluations. Participants will be taken through the essential steps in initiating, planning and undertaking an evaluation, including the evaluation process, types of evaluations, evaluation assessment and scope of the evaluation.

 

The Evaluation Process

Determining what questions to select for evaluation is not easy, particularly if there is no process in place to facilitate the choice of issues on which to focus.

Ideally, there should be a mechanism to initiate the evaluation in terms of determining the issues, the preparation and approval of the scope of work, the undertaking of the evaluation, the submission of the final report to the initiator of the evaluation for approval, and the implementation of the recommendations.

Types of Evaluation

An evaluation could focus on any of the following:

  1. Projects
  2. Programmes
  3. Themes
  4. Sectors
  5. Country
  6. Programme effectiveness
  7. Programme efficiency
  8. Programme impact
  9. Programme sustainability

 

  1. Project

This is a single intervention with defined goals and objectives, which can be implemented in a specific location or in several locations (towns, villages, communities, etc.). For example, an intervention designed to combat the high incidence of teenage pregnancy in Morvant.

  2. Programme

A programme consists of several activities or projects with defined goals and objectives that aim at improving the social and economic circumstances of the beneficiaries.

An example is the SHARE programme. Note that both projects and programmes can be subjected to mid-term and final evaluations. A mid-term evaluation is undertaken in order to determine the overall effectiveness, efficiency, and impact of a project or programme. Findings from a mid-term evaluation could result in making changes to the project/programme. A final evaluation or ex-post evaluation is an assessment of a project/programme after it has been completed.

  3. Thematic Evaluation

This type of evaluation focuses on selected interventions within a sector that address specific priorities, for example teenage mothers, drug addiction, or alcoholism.

  4. Sector Evaluation

Sector evaluation focuses on a cluster of development interventions. For example: health, education, training, agriculture, micro-enterprise, etc.

  5. Country Evaluation

This type of evaluation is common with donor-funded programmes/projects. A donor organisation can decide to evaluate its activities in a given country.

  6. Programme Effectiveness

It is often assumed that once a policy/programme is initiated one can expect successful implementation. This is not always the case, since the intervention may not have been effectively implemented. This could be due to poor design, inadequate inputs or a host of other reasons, which explains why a key component of evaluation is to focus on how well a programme has been implemented by looking at the inputs, processes, outputs and outcomes.

  7. Programme Efficiency

It is often said that knowledge of programme results is not sufficient to declare success in producing outputs and outcomes; results must be measured against their costs. Due to competing demands on the resources of the government, it behooves programme managers to demonstrate that their programmes are not excessively expensive and that, all things considered, their programmes provide value for money.

Programmes could be terminated or retained on the basis of their comparative costs. Of course, in the realm of politics it is not always feasible to kill a programme as a result of inefficiencies or cost overruns; inefficient programmes may be kept purely for political expediency.

  8. Programme Impact Assessment

According to Peter H. Rossi et al., the central premise of any social programme is that the services it delivers to the beneficiary group should induce some change that improves social conditions.

Impact assessment determines the extent to which a programme delivers on its intended objectives in such a way that it results in improvements in the social conditions of the target beneficiaries. Some questions to consider when doing an impact assessment:

  • To what extent can programme outcomes be attributed to the intervention?
  • To what extent did the programme succeed in producing change in the social conditions of the beneficiaries?
  • To what extent can one attribute the changes that have occurred to the specific interventions?

These questions seek to establish a cause-and-effect relationship. The bottom line is to establish the net effect of an intervention. In order to do so, it is useful to define outcome variables. It may be possible to use a classic experimental design.

  9. Programme Sustainability

Sustainability denotes the extent to which an intervention can continue to be viable and produce benefits after the completion or closure of the intervention.

Other Evaluation Tools and Methods

  • Rapid Appraisal Methods

Rapid Appraisal is a relatively quick and low-cost strategy to gather data from beneficiaries and other stakeholders to feed into an evaluation report, in response to the information needs of programme managers.

This method is flexible and easy to apply. However, the information gathered is largely qualitative and it cannot be extrapolated. It is less valid and reliable than data collected from a formal survey.

  • Participatory Methods

This approach enables those who are directly affected by a programme, or have a stake in it, to participate directly in assessing it. It enables them to have a sense of ownership of the Monitoring and Evaluation findings and recommendations.

This method also allows the stakeholders to give their perspectives and impressions about the usefulness or otherwise of the intervention.

  • Sampling

A distinction must first be made between a total population and a sample of that population. A population contains elements, each of which is a potential case. If an evaluation involves a small population, all the elements can be studied.

  • The Probability Sample

The essence of probability sampling is that each element of the larger population (couple, young, old, male, female, etc.) has a known probability of being selected. If each element has an equal chance of being selected, it is known as self-weighting and the findings can be extrapolated to the general population. The findings resulting from probability sampling are considered to be truly representative.

There are several methods for drawing probability samples. The most common ones are as follows:

  • Simple Random Sampling

In this sampling method, each element of the larger population is given a number and a table of random numbers or a lottery technique is used to select elements, one at a time, until the desired sample size is reached. This approach can be tedious. A list of all elements is called the sample frame.
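A minimal sketch of simple random sampling in Python, with the random module standing in for the table of random numbers (the sample frame and sample size are illustrative):

    import random

    # The sample frame: a numbered list of every element in the population.
    sample_frame = [f"household_{i}" for i in range(1, 1001)]

    # Draw 50 elements without replacement. Every element has an equal
    # chance of selection, so the sample is self-weighting.
    sample = random.sample(sample_frame, k=50)
    print(sample[:5])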

  • Systematic Sampling

This is a modification of simple random sampling which is less tedious and less time consuming. The estimated number of elements in the larger population is divided by the desired sample size, yielding a sampling interval; elements are then selected at every interval, starting from a randomly chosen element within the first interval.
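A sketch of systematic sampling under the same illustrative frame; note how the sampling interval is derived exactly as described above:

    import random

    sample_frame = [f"household_{i}" for i in range(1, 1001)]
    desired_size = 50

    # Sampling interval = population size / desired sample size -> 20.
    interval = len(sample_frame) // desired_size

    # Random start within the first interval, then every 20th element.
    start = random.randrange(interval)
    sample = sample_frame[start::interval]
    print(len(sample), sample[:3])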

  • Stratified Sampling

Stratification can be used for either simple random sampling or systematic sampling to ensure the desired representation of specific sub groups. For example, elements in the larger population can be arranged by age, education, income, location, profession, political affiliation, etc.
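A sketch of stratified sampling (the stratum attribute and the 10% sampling fraction are illustrative): elements are first grouped by stratum, then sampled within each stratum so that every sub-group is represented:

    import random
    from collections import defaultdict

    # Hypothetical population, each element tagged with its stratum.
    population = [{"id": i, "location": random.choice(["urban", "rural"])}
                  for i in range(1, 1001)]

    # Group elements by stratum.
    strata = defaultdict(list)
    for element in population:
        strata[element["location"]].append(element)

    # Sample 10% within each stratum (simple random sampling per stratum).
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, k=max(1, len(members) // 10)))
    print(len(sample))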

  • Cluster Sampling

This method is used to simplify sampling by selecting clusters of elements, using simple random, systematic or stratified sampling techniques, and then proceeding to study all the elements in each of the sampled clusters. Usually the clusters are geographic units, such as provinces, districts, towns and villages, or organisational units, such as centres, clinics or training groups.
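A sketch of cluster sampling (the villages and household counts are invented): whole clusters are drawn at random, and then every element inside each sampled cluster is studied:

    import random

    # Hypothetical clusters: 25 villages of 40 households each.
    clusters = {f"village_{v}": [f"hh_{v}_{h}" for h in range(1, 41)]
                for v in range(1, 26)}

    # Randomly select 5 whole clusters, then study ALL of their elements
    # (the defining feature of cluster sampling).
    sampled_villages = random.sample(list(clusters), k=5)
    sample = [hh for v in sampled_villages for hh in clusters[v]]
    print(len(sample))  # 5 villages x 40 households = 200 elements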

  • Non-Probability Sample

This is also known as convenience sampling. It refers to the selection of cases that is not based on known probabilities. The selection could be accidental (choosing whatever case is available) or purposive (selecting specific types of cases). This type of sampling is not representative of the larger population, since there can be over-selection or under-selection of cases. If it is too expensive to use a probability sampling technique, then non-probability sampling may be the most appropriate method to use.

  • Sample Size

The size of the sample is determined by two main things, namely the availability of resources and the proposed plan of analysis.
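The text leaves the calculation itself open; one common supplement (an assumption here, not part of the original text) is Cochran's formula for estimating a proportion, n = z²p(1-p)/e²:

    import math

    # Cochran's sample-size formula for a proportion (illustrative defaults):
    # z = 1.96 for 95% confidence, p = 0.5 for maximum variability,
    # e = 0.05 for a margin of error of +/- 5 percentage points.
    z, p, e = 1.96, 0.5, 0.05
    n = (z ** 2) * p * (1 - p) / (e ** 2)
    print(math.ceil(n))  # -> 385 respondents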

 

DATA COLLECTION AND ANALYSIS

Decisions affecting programmes and development of policies must be informed by the availability and use of credible data. The basic rule of thumb is that evaluation should be based on empirical evidence; therefore, there should be a well-laid plan for gathering and analyzing data. There are various approaches that facilitate the collection of appropriate data.

This module focuses on strategies for collecting quantitative and qualitative data. In broad terms, quantitative data tends to be precise and qualitative data tends to provide descriptive information.

  • Quantitative Data

Findings from quantitative data tend to have numerical values. Data collection instruments can be based on any of the following approaches:

  1. Sample surveys, which can be based on interviews using a standard questionnaire. All respondents are asked the same set of questions.
  2. Direct observation, using service statistics and other programme documents.
  3. Self-administered questionnaires – not ideal for a less educated population, and they tend to have a low response rate.
  4. Secondary data sources (official records, census, official statistics, etc.).

Advantages

Quantitative data tends to be flexible and reliable and allows for international comparisons. The most commonly used software packages for analyzing such data are SAS, SPSS and Epi Info. When using a quantitative approach, keep the following in mind:

  • Use simple language so that respondents are able to answer the questions without any difficulty.

  • Train interviewers and field supervisors prior to administering the questionnaire.
  • Pre-code the responses to facilitate transfer of information and analysis.
  • Avoid embarrassing questions.
  • Pretest the questionnaire before administering it.
  • Ask respondents the same questions that have been tested.
  • Add information from qualitative interviews.

  • Qualitative Data

Qualitative data can be collected using the following key approaches:

  1. In-depth Interviews – Usually there is a guide or a set of questions to facilitate collection of information from respondents. The guide helps to standardize the questions being asked so that there is uniformity in analyzing the responses.
  2. Focus Group Discussions – Respondents are brought together for open discussions on a set of issues prepared in advance. A facilitator helps to guide the discussions and a rapporteur takes notes. It is recommended that focus group discussions involve 8-10 participants; at the outer limits there should be no fewer than 5 and no more than 12.
  3. Direct Observation – This is often used to assess service delivery points to determine the quality of service provision. It requires highly skilled observers and analysts, such as ethnographers.
  4. Case Studies – These normally concentrate on a small number of cases, which are examined in depth. Case studies can examine one moment in time and one event, or processes that evolve over long periods of time.
  5. Content Analysis of written materials – This is useful for analyzing training materials.

 

Advantages of Quantitative and Qualitative Data:

Quantitative:

  • Data is consistent and provides a basis for national and international comparisons.
  • It is cost effective for collecting data from a large population.
  • Provides standardized responses.
  • It is ideal for a large sample size.
  • It is less time consuming.

Qualitative:

  • Suitable for collecting data from people who are less educated.
  • It makes it possible to collect information from respondents whose views are based on gut feelings.
  • Helps to probe social and cultural attitudes.
  • Allows for probing for unintended results.
  • Allows assessment of goals that are not amenable to quantitative analysis, for example empowerment, self-esteem and negotiation skills.

 

Tips for Quantitative and Qualitative Data

To ensure high-quality data, prepare written guidelines for data collection. The guidelines will ensure some degree of standardization in the data collection process. Also, pilot-testing should not be done in an area where the questionnaire will later be administered.

  • Coding

It may be useful to develop a codebook as part of designing a questionnaire. A numerical or symbolic code may be assigned to each possible response, as in the sketch below.
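A minimal sketch of such a codebook (the items and codes are invented for illustration):

    # Each questionnaire item maps its response options to numeric codes.
    codebook = {
        "sex": {"male": 1, "female": 2},
        "attended_training": {"no": 0, "yes": 1},
        "satisfaction": {"low": 1, "medium": 2, "high": 3},
    }

    # Coding a single completed questionnaire.
    raw = {"sex": "female", "attended_training": "yes", "satisfaction": "high"}
    coded = {item: codebook[item][answer] for item, answer in raw.items()}
    print(coded)  # {'sex': 2, 'attended_training': 1, 'satisfaction': 3}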

  • Data Analysis

Once data is collected, either qualitative or quantitative, the next step is analyzing it. There are various techniques for analyzing both qualitative and quantitative data.

  • Analyzing Qualitative Data

Qualitative data is often presented in narrative form. It is not always feasible to assign a code or even a numerical character to qualitative data; instead, qualitative data can be coded into categories (thematic coding) and presented as a narrative. The following shows how qualitative data can be categorized and presented:

  1. Case Studies – Based on narratives or interpretations of respondents' understanding of the workings and benefits of an intervention.
  2. Process Analysis – Visual depiction of a programme's processes and outcomes.
  3. Causal Flow Charts – Show how things work.
  4. A Decision Tree Model – Graphically outlines the realm of choices and priorities that go into decision-making.
  5. Taxonomy – A visual representation/diagram showing how respondents relate categories of language and meaning.
  • Analyzing Quantitative Data

This usually involves mathematical calculations through the application of statistics. The most commonly used statistics are descriptive and inferential statistics.

  • Descriptive Statistics

Descriptive statistics is the first step in quantitative analysis. Descriptive statistics are used to describe the general characteristics of a set of data. Descriptive statistics include frequencies, counts, averages and percentages.

 

This method is used to analyze data from monitoring, process evaluation and outcome /impact evaluation.

  • Frequencies

A frequency denotes the number of observations or occurrences of a single (univariate) variable, as in the sketch below.
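A minimal sketch of descriptive statistics, including a frequency count, using Python's standard library (the ages are made-up monitoring data):

    from collections import Counter
    from statistics import mean

    # Hypothetical monitoring data: ages of ten programme beneficiaries.
    ages = [15, 17, 16, 15, 19, 18, 15, 16, 17, 15]

    print("count:", len(ages))            # number of observations
    print("average:", mean(ages))         # arithmetic mean
    print("frequencies:", Counter(ages))  # occurrences per age value
    share = 100 * sum(a < 18 for a in ages) / len(ages)
    print(f"percentage under 18: {share:.0f}%")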

  • Inferential Statistics

Inferential statistics allow the evaluator to make inferences about the population from which the sample was drawn, provided the sample was selected using a probability method such as simple random or stratified sampling. Testing for statistical significance helps to ensure that the differences observed in the data, however small or large, are not due to chance.
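As a sketch of one such significance test (assuming SciPy is available; the outcome scores are invented), a two-sample t-test compares beneficiaries against a comparison group:

    from scipy import stats

    # Hypothetical outcome scores for programme beneficiaries vs. a
    # comparison group drawn from the same population.
    treatment = [72, 68, 75, 80, 66, 77, 73, 70]
    comparison = [65, 63, 70, 67, 61, 66, 64, 68]

    t_stat, p_value = stats.ttest_ind(treatment, comparison)
    # A p-value below 0.05 suggests the observed difference is unlikely
    # to have arisen by chance at the 95% confidence level.
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")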

Thank you.

 

Dolphins Training & Consultants Ltd

View Park Towers, 10th Fl, Utalii Lane & L584 - off UN Avenue, Gigiri.
P.O. Box 27859-00100 Nairobi, Kenya  Tel: +254-20-2211362/4/5 or 2211382  Cell: +254-712-636404
training@dolphinsgroup.co.ke www.dolphinsgroup.co.ke

Your No.1 Corporate Training Partner  |  DIT No. 711

 

We push the human race forward and so do you… Unleash Your True Potential!