Geospatial integrity of geoscience software
Coupled with the growth in agile software development practices and digitalisation, there are more applications than ever handling spatial data, and therefore more opportunities than ever for it to go wrong. Failures in geospatial data integrity can have serious consequences, whether they affect business decisions or, worse, result in a safety-related incident when data is mispositioned. GIGS is an open-source digital testing framework designed to evaluate the capability of software in establishing and maintaining the integrity of geospatial data.
It is primarily aimed at geoscience applications, but elements can be readily applied to any software that handles spatial data. The testing framework comprises a series of qualitative evaluations that assess software functionality and configuration, coupled with data-driven tests that quantify the accuracy and robustness of geodetic engines and libraries in executing coordinate operations.
The testing covers various components of application functionality, including how data is manipulated and exchanged, how metadata is assigned and how the user interface visualises spatial data. Users and purchasers of geospatial applications are increasingly requesting evidence of integrity testing in their implementations; GIGS provides the de facto standard for comparing and benchmarking functionality, on a completely free-to-use basis.
This seminar will provide an overview of the importance of data integrity in the geospatial profession, followed by advice and guidance on how to use the GIGS testing framework to implement software and data models in accordance with industry best practice. The session is relevant to both users and developers of geospatial software and will serve as a foundation for getting started with geospatial testing.
No prior geospatial development experience is necessary, and the session will be useful even if geodesy has always been a mystery in your operations. Josh has 10 years' experience in the energy industry, working in both survey data acquisition and QC operations, as well as geospatial data science and managing digital solutions.
Data Operations, Surface and Wellbore Deviation Data: The range and accuracy of coordinate operations and data manipulations pertaining to wellbore data supported within the application.
Audit Trail: The audit trail for coordinate and data operations carried out within the geodetic engine of the application.
Deprecation: The deprecation of algorithms and files within the geodetic engine of the application.
Within each Test Series several tests are presented, typically between 10 and 20 individual tests per series.
These tests refer to a specific element of functionality or capability being evaluated. For each relevant test, the Evaluator should select the criterion that describes the conditions most closely aligned with the application being tested. In some cases, a test may be associated with a Test Procedure utilising the Test Dataset (see Section 2). Tests are numbered sequentially and, where relevant, sub-numbered if multiple tests pertain to one particular piece of functionality.
The GIGS Test Series is provided in both online and offline formats, as an online platform and a spreadsheet respectively. The online platform offers enhanced functionality for storing and calculating results and managing evaluation details; however, the Test Series content is identical in both the online and offline versions. Hence, the same result is derived using either system.
Nonetheless, it is strongly recommended that the online platform is used where possible, and this document refers primarily to operation of the online platform. A full description of scoring and the associated definitions of geospatial integrity is given elsewhere in these Guidelines; the basic principles are outlined below.
No GIGS Score: directly compromises geospatial integrity.
Elementary: no capability to perform coordinate operations, but maintains geospatial integrity.
Basic: limited capability to perform coordinate operations, establishes and maintains geospatial integrity.
Intermediate: extensive capability to perform coordinate operations and establishes and maintains geospatial integrity to a fully satisfactory degree.
Advanced: complete capability to perform coordinate operations with additional features to reduce the possibility of compromising geospatial integrity.
Each criterion in the Test Series is associated with a particular GIGS Level, depending on how it aligns with the definitions of geospatial data integrity.
For each GIGS Level, a percentage score is assigned to indicate how the application performs at that level. A GIGS Score is assigned for each Series rather than calculating a single total score for the entire application; however, a sum of all scores can be derived if required. The scoring calculation algorithm is embedded in the source code of the online and offline Test Series and can be made available on request.
Nonetheless, geospatial integrity is still relevant for software that has no functionality to perform coordinate operations. Evaluators assessing this type of software should ensure that Conditional Test A (see Section 1) is answered accordingly. Thereafter, all Basic, Intermediate and Advanced tests will be disabled.
In the offline spreadsheet, if Elementary tests are disabled then they are greyed out and should be ignored note that the text may still be visible but should not be considered in any tests. The following guidelines focus mainly on the online platform, although the same principles apply to the offline spreadsheet version.
The following workflow is recommended for completing the Test Series: 1) Collate application details, including organization, version number and testing date.
Criteria for each test are structured to allow for methodical and logical assessment of the responses. The assignment of a criterion to a particular GIGS Level corresponds with the associated definitions of geospatial integrity for that particular subject and classification (see Section 1).
Note that not every test has the full range of criteria levels (Basic, Intermediate, Advanced); for example, some tests may refer only to Intermediate or Advanced functionality. However, all tests include the possibility of No GIGS Score, to account for any identified failures in geospatial integrity.
In the offline spreadsheet, criteria selection is made by tick box; in the online platform, selection is made by choosing the criterion from a drop-down menu. Criteria are colour-coded based on the respective level colour, as outlined in Section 1. Only one criterion may be selected for each test. If there is no criterion that directly matches the software functionality, then the closest should be selected and a comment made.
Compliance with lower classification levels is implied by the selection of any higher level. For example, if an Intermediate criterion is appropriate for a particular test, then fulfilment of the requirements for the Basic level is implied. If that is not the case, the Evaluator should qualify the response in the comments. It is highly recommended that supporting comments and evidence are attached to each test, relating to the subject matter being evaluated. This could be clarifying notes, screenshots or even calculation results.
In the offline spreadsheet a comment field is provided alongside each row. Depending on the response to the Conditional tests (see Section 1), certain tests may be disabled. For Elementary applications, all Basic, Intermediate and Advanced tests will be disabled. In the online platform any disabled tests will not be visible to the user, whereas in the offline spreadsheet a grey hash will be applied to disabled tests and an error presented if one is selected. All applicable tests should be completed in order to mark the evaluation as complete and to generate the final GIGS score for each test.
Error messages will be displayed alongside any tests that have been missed or answered in error. Some tests are correlated with others. In the online portal these correlations are referred to in the tooltip, and in the offline spreadsheet they are indicated by comment callouts on the test number cell. The correlation indicators denote which tests are highly correlated, and it is strongly recommended that the selected criteria for all correlated tests are assessed in unison to ensure agreement between the respective tests.
Additionally, a comprehensive flowchart of the Test Series architecture is provided as an offline GraphML file or via the Flowchart page on the online portal. In the online portal the Conditional tests are presented to the Evaluator once an application to be evaluated has been set up. In the offline spreadsheet these tests are contained within the Conditionals tab. The Conditionals determine which tests are to be undertaken for a particular evaluation. Depending on the response, certain Test Series or individual tests may be disabled and removed from the evaluation scoring calculations.
The other Conditionals address specific aspects of functionality, such as whether the application supports seismic data or whether it has a user interface. The Conditionals selection can be changed at any point in an evaluation if application functionality changes; however any entries made to tests that are subsequently disabled will be locked.
They can be unlocked by reversing the Conditional response. It is important to answer the Conditional tests as accurately as possible early in the testing process, to ensure a relevant and complete consideration of applicable tests is made. The spatial data concerned primarily includes seismic and wellbore data but could be applied to any geospatial data type.
The first two terms refer to situations where no CRS information is associated with a spatial data file, and the latter to instances where incomplete or partially incorrect information is provided on the CRS.
Tests referring to specific subjects are embedded in the respective Test Series relating to that subject. Tests concern the source and reference of the library, how it is accessed, and version controlled.
The structure of these tests is such that the criteria responses are presented in matrix form, indicating that for each test a response for every geodetic data object should be selected. Tests concern the source and reference of the library and how it is created, updated, managed and accessed. Most other User Interface aspects are embedded in the other Test Series, as they specifically apply to the subject matter of that Test Series.
Conversions of particular interest to the energy industry are included in the Test Series; however, if additional conversions are supported in the application, it is up to the Evaluator to define and execute additional tests, for which Test Data can potentially be sourced from other geodetic testing packages. Evaluator discretion should be used as to how many additional tests are implemented. Coordinate transformation methods of particular interest to the energy industry are likewise included in the Test Series; however, if additional transformations are supported in the application, it is up to the Evaluator to define and execute additional tests and Test Data, which potentially can be sourced from other geodetic testing packages.
Due to the length of the tests in this series, the tests are grouped under subject-related headings, but this should not change the order of test completion. If multiple formats are supported in the software, then each test should be repeated for all format types and any discrepancies in behaviour between the different formats should be noted in the test comments.
Note that there are several Conditional tests (see Section 1) that apply to this Test Series. In this Test Series a distinction is made between a well path and a wellbore survey. Various alternative terms are in use in the industry, but no standard exists, and the terms proposed in these Guidelines do not constitute a proposal for such standardization. The GIGS Test Dataset is derived using LMP; if an application does not support this curve calculation method, it is acceptable to modify the Test Data so that it can be imported and processed.
Accounting for Earth Curvature in Directional Drilling. Some software may capture auxiliary metadata but not record it in an audit trail. Several tests specifically address this scenario. Deprecation is not a widely supported function in most software, so this Series may be omitted in such cases.
As it is expected that all software will include some error trapping mechanisms, and these will be specific to a particular data theme, the error trapping tests are now embedded in the relevant Test Series to which they relate (for example, error trapping tests pertaining to wellbore data are included in the wellbore data Series). The error trapping tests described in the Test Series aim to capture software behaviour regarding geospatial integrity.
GIGS Test Procedures have a defined set of results and tolerances that should be achieved and are therefore deemed to deliver a Boolean pass or fail outcome. In most cases there is consequently a one-to-one file relationship for each Test Procedure, whereby the Evaluator loads data into the application from the input file, then compares the results with the data in the output file.
Where multiple similar routines are to be run for a particular Test Procedure but different parameters are required, the Test Data files are split into multiple parts. Associated P-format files are not split into separate inputs and outputs. In the input files, the values that are to be loaded into the application will be populated, and the attributes expected to be calculated will typically be marked as NULL, with the exception of where a geodetic parameter field is unpopulated.
In the output files, all attributes will be populated, including the input values for reference and the expected output values. In most Test Procedures it is therefore possible, with some exceptions, to use only the output file, as it is a single file containing both the input and output values.
This is down to Evaluator preference. The Test Data files are indexed in Appendix A. The intention is that these files provide the reference source for all other Test Dataset files. These are the files primarily maintained by the GIGS authors. The tab-delimited ASCII format is offered as the base class as it is simple, universally consumable and platform agnostic. See specific notes in Section 2. The P-format files contain more explicit definitions of geodetic parameters, due to the Common Header, and are better suited than the other classes where Test Data is to be imported, transferred or exported using an industry-standard data exchange format.
Previous versions of P-formats are offered as it is recognized that some applications may not support newer formats, and it is important to be able to test legacy data workflows. Due to the legacy nature of the older P-format files and inconsistencies in how the format has been subsequently adapted and adopted, the construct of the Test Dataset legacy files may not be exactly conformant with what an application is expecting.
Therefore, modifications may need to be made by the Evaluator to ensure the legacy files can be correctly exchanged. The P1-format is well suited to the GIGS Test Procedures in that it explicitly defines geodetic parameters in the header and is flexible in storing point data. Note that for the P-format files, the file extension suffix is set to the vintage of the P-format version.
If the software being tested only accepts a shortened P-format file extension, the suffix may be amended by the Evaluator. The 2D data array consists of the data that is required for the particular test subject and typically contains either a list of geodetic parameters, or a list of coordinates and associated attributes. The data array usually comprises a mixture of integer, float and string values; however, within the ASCII files there is no associated schema that defines field types.
At the base class level it is preferable to treat all values as strings (text), in order to ensure that numeric representations are preserved exactly as they are stored (see Section 2). Additionally, within the data array there may be attributes that are purposefully not populated, for example in an input file where part of a row of data is to be calculated. There are other auxiliary fields that sometimes appear in Test Data, for example identifying points that are out of bounds and should therefore fail.
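As an illustration of this string-based handling, the following minimal sketch (in Python) reads a tab-delimited GIGS-style ASCII file while keeping every data array value as text; the example file name, and the assumption that header records begin with a hash, are illustrative rather than prescribed by GIGS.

    import csv

    def read_gigs_ascii(path):
        """Read a tab-delimited GIGS-style ASCII file, keeping every data
        array value as a string so that stored numeric precision is
        preserved exactly."""
        header, rows = [], []
        with open(path, newline="") as f:
            for record in csv.reader(f, delimiter="\t"):
                if not record:
                    continue
                if record[0].startswith("#"):   # assumed hash-prefixed header record
                    header.append(record)
                else:
                    rows.append(record)         # all values remain strings
        return header, rows

    # Hypothetical usage; the file name is illustrative only.
    # header, rows = read_gigs_ascii("GIGS_example_input.txt")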
The output files contain the complete set of results, as well as the input data and operation information such as conversion direction and test result tolerance. The seismic and wellbore test coordinate data are supplemented with additional identifying attributes to assist in loading and categorizing test points.
See the respective P-format documentation for Common Header definitions of the P-format files. The first vertical 1D block of attributes provides identifying details about the file, including its name and associated test number, followed by the version of GIGS that the file was released under and the version of the EPSG Dataset from which the data was derived.
The second vertical 1D block of attributes contains the field descriptors that define the field name and associated parameters, where appropriate.
These column numbers have an additional function of casting all data array values as strings when the ASCII data is ingested in a spreadsheet program (see Section 2). Excel can store numbers to 15 digits of precision, as per the IEEE 754 specification, which means that very small or very large floating-point numbers may be rounded or truncated.
A small portion of the GIGS Test Dataset parameters, such as ellipsoid axes values, exceed this number of digits and hence are not correctly represented when read in Excel. In order to ensure full storage precision is maintained, it is recommended that the ASCII data array is interpreted as string values and that automatic number interpretation is not applied. Excel has a further idiosyncrasy when loading text files in that numbers and strings are presented at fixed column widths in a workbook; if the file is then saved, only the data that is visible will be preserved.
Therefore, if the Evaluator wishes to use the Test Dataset in Excel format, it is recommended that a controlled method of loading the data is followed (see Appendix F). Note that it may be necessary to manipulate the header slightly for ease of use in Excel. The horizontal field header numbers can also be supplemented with the field names to allow for easy identification of columns. The hash header may also be filtered out for ease of sorting; this can be re-established by adjusting the filter.
See Appendix F for complete instruction on the Excel process. Key portions of the GIGS Test Procedures require that Evaluators verify that the precision of the above parameters, as stored and utilised in the software, is at least as high as that of the corresponding parameters stored within the EPSG Dataset.
Some coordinates for certain tests may be stored and presented to 3 decimal places (at least 1 millimetre) to account for common rounding errors encountered in applications; however, test tolerances are always at the centimetric level, so the additional significant figure is required only for further investigation. Latitude and longitude in decimal degrees or grads are input to a specified minimum precision. For certain tests some coordinates may be stored and presented to 8 or 9 decimal places (millimetric) to account for common rounding errors encountered in applications; however, test tolerances are always at the equivalent of centimetric level, so the additional significant figure is required only for further investigation.
Geographic coordinate data are not provided in other representations degrees minutes seconds, radians, packed DMS, decimal minutes.
If the application does not support import or export of decimal degrees then the coordinates will need to be converted prior to ingestion, using standard conversion algorithms.
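Where such a conversion is needed, a minimal sketch of the standard degrees/minutes/seconds to decimal degrees calculation is given below; the function name and hemisphere convention are assumptions for illustration.

    def dms_to_decimal_degrees(degrees, minutes, seconds, hemisphere="N"):
        """Convert a degrees/minutes/seconds triplet to decimal degrees;
        southern and western hemispheres are returned as negative values."""
        dd = abs(degrees) + minutes / 60.0 + seconds / 3600.0
        return -dd if hemisphere in ("S", "W") else dd

    # Example: 52 deg 30' 30.00" N -> 52.50833333... decimal degrees
    print(dms_to_decimal_degrees(52, 30, 30.0, "N"))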
This is a pseudo-unit used in the EPSG Dataset for storing CRS definition parameter values given in sexagesimal degrees (degrees, minutes and seconds) as a floating-point number in a single numeric field. To gain increased precision, users of the format may have used implied decimals, which generally are not recognised by geospatial software.
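A minimal sketch of unpacking such a sexagesimal DMS value into decimal degrees is shown below; it assumes the common DDD.MMSSsss packing and parses the value as text so that trailing (or implied) decimals are not lost to floating-point interpretation.

    def sexagesimal_dms_to_decimal(value_text):
        """Convert a sexagesimal DMS value packed as DDD.MMSSsss (an
        assumption about the packing) into decimal degrees."""
        sign = -1.0 if value_text.strip().startswith("-") else 1.0
        text = value_text.strip().lstrip("+-")
        deg_part, _, frac = text.partition(".")
        frac = (frac + "0000")[:4] + frac[4:]   # pad so minutes and seconds digits exist
        degrees = int(deg_part)
        minutes = int(frac[0:2])
        seconds = float(frac[2:4] + "." + (frac[4:] or "0"))
        return sign * (degrees + minutes / 60.0 + seconds / 3600.0)

    # Example: "35.30155" encodes 35 deg 30' 15.5" -> 35.504305...
    print(sexagesimal_dms_to_decimal("35.30155"))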
Users may have implemented workarounds to generate their own P1 Reader software to manage these limitations. The Test Procedure numbers are four digits long, with the first two digits corresponding to the Test Series number to which the test is related.
The Test Procedure number increases incrementally by 1 for each Test Procedure. The Test Dataset occasionally may be modified, supplemented or potentially have elements removed. Some legacy Test Procedures have been removed because they are outdated or irrelevant, but in order to preserve the Test Dataset structure and ensure backwards compatibility, number assignments for such Test Procedures are not re-used.
However, this correlation was lost as the Test Dataset was updated over various iterations. This is why, in the current release, it may appear that certain numbers are skipped or are in the wrong order.
GIGS codes are integer numbers within a reserved range. In previous GIGS versions test file names also contained the version date; however, this was removed to ease file compilation. In the real world, datums and CRSs are stand-alone entities capable of existing without defining their relationship to WGS 84. As per Section 2, to ensure backwards compatibility in the Test Dataset, such names and codes are not re-used. Hence there may be instances where it appears that codes or names are missing in the order.
These geodetic objects are also used in tests in other Series. The creation of custom GIGS geodetic objects is deliberate, for the following reasons. Firstly, these GIGS definitions guard against geodetic objects that may exist in predefined libraries within software that have a correct name but incorrect parameters. Thus, it allows data operations tests to be conducted using the correct definitions, controlled by the Evaluator. A transformation does not explicitly form part of a coordinate reference system definition. Thus, they should be limited in usage to a specific geographic area.
For example, conversions (map projections) are tested beyond the projection method extents. Similarly, software is tested to establish which of the overlapping Canadian and US transformation data options is being referenced. This was chosen deliberately to allow full testing of the coordinate operations Series, and to minimize the number of projects that need to be created for those tests.
Testing for geographic applicability is not included in the tests to date, other than for NADCON and NTv2 gridded transformations across the US-Canadian border, but the Test Data allow for such testing in the future, should this be required. This provides flexibility in the evaluation, in that not all software needs to support the wide range of geodetic data objects that are included in the predefined library tests.
Note that the geographic applicability of the objects does not necessarily apply to the Test Dataset in the Series, as the custom GIGS objects are designed to be used outside of the typical area of use. Therefore, a Test Procedure may require the creation and execution of a coordinate operation, or CRS, outside of the design area of use of the software.
However, in these cases it is the geodetic data object itself that is being used outside its area of applicability, rather than the software. For example, an application may be designed explicitly not for use in the USA. In addition to Usage Extent, where appropriate, each geodetic data object is also assigned its respective aliases, as recorded in the EPSG Dataset.
Provided that the name is included in the list of aliases, the individual test can be deemed a Pass; however, use of any other names should be reported. This normally will require the creation of a project within the software. To comply with the test scenarios, these projects need to be referenced to specified CRSs. The coordinate value from each iteration is compared to the tolerance value specified for each Test Procedure, and the number of iterations it takes to exceed the tolerance is reported.
Realistically, only a limited number of iterations is necessary to adequately test the function. The round-trip calculation threshold allows for small numerical differences: a single calculation could give a coordinate that differs slightly from the original, and this is a valid and acceptable result. A failure in geospatial data integrity may occur, however, if an error is introduced during each operation, and that error then propagates and increases through the entire calculation chain. It is recognised that the round-trip calculation procedures can be cumbersome to implement, and so they should only be executed where the routine can be run programmatically.
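Where the routine can be scripted, a round-trip iteration test might look like the sketch below, which uses the pyproj library; the CRS codes, tolerance and iteration cap are illustrative assumptions rather than GIGS-mandated values.

    from pyproj import Transformer

    def round_trip_iterations(lon, lat, crs_from="EPSG:4326", crs_to="EPSG:32631",
                              tolerance_m=0.01, max_iterations=1000):
        """Repeatedly convert a point forward and back, reporting how many
        round trips it takes for the drift from the first forward result to
        exceed the tolerance."""
        fwd = Transformer.from_crs(crs_from, crs_to, always_xy=True)
        inv = Transformer.from_crs(crs_to, crs_from, always_xy=True)
        x0, y0 = fwd.transform(lon, lat)
        cur_lon, cur_lat = lon, lat
        for i in range(1, max_iterations + 1):
            e, n = fwd.transform(cur_lon, cur_lat)
            cur_lon, cur_lat = inv.transform(e, n)
            drift = ((e - x0) ** 2 + (n - y0) ** 2) ** 0.5
            if drift > tolerance_m:
                return i        # tolerance exceeded at this iteration
        return None             # tolerance never exceeded within the cap

    print(round_trip_iterations(3.0, 52.0))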
Note that details of the input and output Test Data files are for guidance only and refer to the ASCII base class; depending on the chosen, or derived, Test Dataset format, the actual file names and structure may be different (see Section 2).
It constitutes the minimum set of definitions to be checked. This may be envisaged as a hierarchical series of entities, with higher-level entities being dependent upon lower-level entities. The higher-level entity in one pairing may then be the lower-level entity in a later pairing; for example, a geodetic CRS (higher level) includes a geodetic datum (lower level). Lower-level entities may be used in one or many higher-level entities. The tests begin at the bottom of the hierarchy.
Test Purpose: To verify reference units of measure bundled with the application. This file contains three separate blocks of data for linear units, angular units and scaling units. The values of the base unit per unit should be correct to at least 10 significant figures.
Particular attention should be given to whether the application distinguishes between different types of feet and supports different representations of degrees. If this is the case, it may be possible to convert a coordinate set in base unit to the desired unit in order to compute the effective ratio to base unit. Otherwise, report that the conversion ratio cannot be determined. Test Purpose: To verify reference ellipsoid parameters bundled with the application. It may additionally contain a flag to indicate that the figure is a sphere: without this flag the figure is an oblate ellipsoid.
Equivalent alternative parameters are acceptable but should be reported. The values of the parameters should be correct to at least 10 significant figures.
These must be clearly distinguished. If necessary, the values of these alternative parameters should be calculated from those given in the EPSG Dataset using standard formulae available in EPSG Guidance Note 7 part 2 or in geodetic texts. Equivalent alternative units are acceptable but should be reported. In this case it should be assumed that they are Greenwich. The decimal degree equivalents of the sexagesimal format are also included within the test file.
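Where the alternative ellipsoid parameters need to be derived from the defining parameters, the standard relationships can be applied, as in the minimal sketch below (the example uses the well-known WGS 84 defining values purely for illustration).

    def derived_ellipsoid_parameters(semi_major_m, inverse_flattening):
        """Derive equivalent alternative ellipsoid parameters from the
        semi-major axis and inverse flattening using the standard
        relationships f = 1/invf, b = a(1 - f), e^2 = 2f - f^2."""
        f = 1.0 / inverse_flattening
        b = semi_major_m * (1.0 - f)            # semi-minor axis
        e2 = 2.0 * f - f * f                    # first eccentricity squared
        return {"flattening": f, "semi_minor_axis": b, "eccentricity_squared": e2}

    # Example with WGS 84 defining parameters (a = 6378137 m, 1/f = 298.257223563)
    print(derived_ellipsoid_parameters(6378137.0, 298.257223563))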
Tests for component logical consistency should be included: for example, if a higher-level library-defined component such as the ED50 datum is selected, it should not be possible to change any of its lower-level components (such as the ellipsoid) from the predefined value (in this example, International 1924). See also the related Test Procedure.
Occurrences should be reported. Test Purpose: To verify reference conversions (map projections) bundled with the application. CRSs with the same datum and map projection but differing coordinate system attributes (particularly axes order and units) are considered to be different CRSs.
Variances from EPSG should be reported. Test Purpose: To verify reference coordinate transformations bundled with the application. CRSs with the same datum but differing coordinate system attributes are considered to be different CRSs. Test Purpose: To verify reference vertical transformations bundled with the application. The Test Procedures in this section should be conducted sequentially as some data loaded in early tests is required in later tests.
The data may be envisaged as a hierarchical series of entities, with higher-level entities being dependent upon lower-level entities. For example, a geodetic datum (higher level) includes an ellipsoid (lower level), and the datum is in turn a component of a geodetic CRS. Lower-level entities may be used in one or many higher-level entities. The tests begin at the bottom of the entity hierarchy. The fully built-up CRSs and transformations are used for later tests, particularly those in the Data Operations Series.
See Section 2. These default transformations deliberately use the geocentric translation method to ensure broadest applicability. Similarly, vertical datums are, in general, associated with mean sea level.
Test Purpose: To verify that the application allows correct definition of a user-defined unit of measure. Expected Results: The application should accept the Test Data. The order in which the name and the unit factor are entered is not critical, although that given in the Test Dataset is recommended.
Test result will be pass or fail. If fail, details of the failure should be reported. Note: Units are defined relative to an ISO base unit: metre for length, radian for angle, unity for scale. These are included in the dataset for reference only. The expected input is the number of base units per unit. It may be described as a fraction formed from two values, which EPSG refers to as factor B and factor C (numerator and denominator respectively).
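A minimal sketch of forming the unit factor from factor B and factor C is given below; parsing the values as decimal strings is an implementation choice for keeping the ratio exact, not a GIGS requirement.

    from fractions import Fraction

    def unit_factor(factor_b, factor_c):
        """Number of base units (e.g. metres) per unit, formed as the
        ratio factor B / factor C."""
        return Fraction(factor_b) / Fraction(factor_c)

    # Example: a foot defined as exactly 12/39.37 metres (the US survey foot)
    print(float(unit_factor("12", "39.37")))    # ~0.304800609601219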
If necessary, use the Test Data ratio as the numerator and unity as the denominator. Test Purpose: To verify that the application allows correct definition of a user-defined ellipsoid.
The order in which the name and the ellipsoid parameters are entered is not critical, although that given in the Test Dataset is recommended. The metric equivalent of Test Data non-metric values can be obtained using the unit conversion factor included in the Test Data. Test Purpose: To verify that the application allows correct definition of a user-defined prime meridian. The order in which the name and the meridian parameters are entered is not critical, although that given in the Test Dataset is recommended.
Test Purpose: To verify that the application allows correct definition of a user-defined geodetic datum, using both user-defined data and predefined components. The detailed geodetic definition of origin and orientation is not required.
See the related Test Procedure. Depending on the specific configuration of an application, it is recommended that the latest realization at the time of writing is defined, but it is acceptable to construct either the full ensemble or another specific realisation. Test Purpose: To verify that the application allows correct definition of a user-defined geodetic CRS.
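For a geodetic library accessed programmatically, a user-defined geographic CRS can be built from explicit ellipsoid parameters rather than a predefined library code; the sketch below uses the pyproj library and WGS 84-like values purely for illustration.

    from pyproj import CRS

    # A user-defined geographic CRS built from explicit ellipsoid parameters
    # (values are illustrative, not GIGS Test Data).
    user_geog_crs = CRS.from_proj4("+proj=longlat +a=6378137 +rf=298.257223563 +no_defs")
    print(user_geog_crs.ellipsoid.semi_major_metre,
          user_geog_crs.ellipsoid.inverse_flattening)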
Early-bound entities will be used in later Series tests. Test Purpose: To verify that the application allows correct definition of a user-defined conversion (map projection). However, if the application cannot create a particular conversion because it has a latitude of origin not on the equator, report the fact and create the two alternative conversions instead.
These are needed for later tests. The order in which the name and the conversion parameters are entered is not critical, although that given in the Test Dataset is recommended. All parameters should be variables. Some applications hardwire the values of some of these variables. Latitudes are positive north, negative south; longitudes positive east, negative west. Several different map projections are to be tested. Test Purpose: To verify that the application allows correct definition of a user-defined projected CRS.
The order in which the name and the projection parameters are entered is not critical, although that given in the Test Dataset is recommended. This requirement may be inherited by projected CRSs. See the related Test Procedures. Test Purpose: To verify that the application allows correct definition of a user-defined coordinate transformation. The order in which the name and the coordinate transformation parameters are entered is not critical, although that given in the Test Dataset is recommended.
Their units may vary. Several different coordinate transformations are to be tested. If this is the case, treat this test as part of the related Test Procedure and report the fact. In these cases, if the software already has the EPSG version of the transformation loaded, then this can be used in place of manually adding the transformation; otherwise, manually add the transformation with the EPSG code specified.
Report this event if it occurs. The data within the grid files are not used in any functionality calculations and tests. This test is designed to evaluate whether user data may be inserted into the application; there is no relationship between the data in the grid file and the real transformation grid. Test Purpose: To verify that the application allows correct definition of a user-defined vertical datum.
The order in which the name and the components are entered is not critical, although that given in the Test Dataset is recommended. This requirement may be inherited by vertical CRSs.
Test Purpose: To verify that the application allows correct definition of a user-defined vertical CRS. Test Purpose: To verify that the application allows correct definition of a user-defined vertical coordinate transformation.
Test Procedure: - User-defined concatenated coordinate transformation. Test Purpose: To verify that the application allows correct definition of a user-defined concatenated coordinate transformation. The order in which the steps of the concatenated coordinate transformation are entered is not critical as long as the step number is correct, although that given in the Test Dataset is recommended.
A number of conversion methods commonly used in the energy industry are tested using realistic coordinate values (see Appendix E). If the software does not follow this recommended method naming, see the individual Test Procedure descriptions that follow for some common alternative names for the same method. Test Procedures for which the method is not supported by the software should be reported as such in the Test, and the Test Data excluded from the testing process.
Software is not required to use the formulae published by EPSG, but its algorithms are expected to give results which are not significantly different (see individual test tolerances) from those using the EPSG formulae. The Test Data for this series comprises input and output files with each test point on a separate row in the data array. Points to be input for the forward and reverse calculations are generally interleaved in adjacent rows.
All points in the input file should be tested in the direction dictated by the input fields. Round trip calculations from the converted coordinates back to the original should also be tested for the point or points indicated in the input file, with the final coordinates compared with the starting values.
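Where the geodetic engine can be driven programmatically, a forward and reverse check against the expected values might look like the sketch below, using the pyproj library; the CRS codes, tolerances and point structure are illustrative assumptions, as in a real run they are taken from the Test Data files.

    from pyproj import Transformer

    def check_conversion(points, crs_geog="EPSG:4326", crs_proj="EPSG:32631",
                         tolerance_m=0.01, tolerance_deg=1e-7):
        """For each test point, run the forward (geographic -> projected) and
        reverse conversions and compare against the expected values."""
        fwd = Transformer.from_crs(crs_geog, crs_proj, always_xy=True)
        inv = Transformer.from_crs(crs_proj, crs_geog, always_xy=True)
        results = []
        for lon, lat, exp_e, exp_n in points:
            e, n = fwd.transform(lon, lat)
            ok_fwd = abs(e - exp_e) <= tolerance_m and abs(n - exp_n) <= tolerance_m
            rlon, rlat = inv.transform(exp_e, exp_n)
            ok_rev = abs(rlon - lon) <= tolerance_deg and abs(rlat - lat) <= tolerance_deg
            results.append((lon, lat, ok_fwd and ok_rev))
        return results

    # Usage (coordinates and expected values would be read from the GIGS
    # input and output files rather than typed in):
    # results = check_conversion([(lon, lat, expected_easting, expected_northing), ...])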
The CRS to which input and output coordinates are referenced is given in the file headers (see the relevant Series data for associated parameters). Should software have failed the earlier Series tests, it should still be possible to run this Series of tests.
In some cases there may be multiple CRSs to be tested for one Test Procedure (or subtly different algorithms); in these cases the files are split into multiple parts. Coordinates of test points are given in the order and units that are described in the CRS definition. Latitude and longitude are given in decimal degree or grads representation. Decimal degree values for latitude are positive for the northern hemisphere and negative for the southern hemisphere, and values for longitude are positive for the eastern hemisphere and negative for the western hemisphere.
Each set of Test Data comprises a small number of points, mostly laid out in two perpendicular transects. These transects avoid the system origin. When considered appropriate, additional points or transects have been added.
The test points are divided into multiple subsets and data for testing both forward and reverse cases have been generated. The tests investigate computational behaviour of the method within and slightly beyond the reasonable area of use.
The Test Datasets are not exhaustive. Developers are expected to augment the data to test frequently encountered failure conditions (boundary conditions, etc.). Precision of Test Data is further described in Section 2. The Test Procedures in the series are designed for the conversion of individual points. If the software does not have the functionality to allow this, it may be necessary to first create a project for each Test Procedure and to load the test points as if they were the locations of geoscience data.
This data includes forward and reverse calculations. Because of the importance of the Transverse Mercator projection method, this test is more extensive than for other methods. If the application allows these methods they should be included in this Test Procedure.
Results from both formulae are included in the Test Data. The Test Data is in two parts. The Test Data is in three parts, in which the grid coordinates are in different units. Note: There are two significantly different approaches to the handling of the ellipsoidal development of this map projection method.
These are often not clearly distinguished through the method name. These give significantly different results at locations away from the projection origin and should be considered to be different methods.
The latter is a known problem area in some applications. Note: Applications may define the map projection in different ways.
One variation is in the location at which false grid coordinates are applied. EPSG caters for two alternatives and considers these to be different methods — see Test Procedure below. Another variation involves how the initial line is defined. EPSG requires an azimuth value. An alternative approach is to define this azimuth through the coordinates of two points; this approach is not catered for by EPSG. EPSG caters for two alternatives and considers these to be different methods — see Test Procedure above.
Another variation involves the means by which the initial line is defined. Note: There are numerous polyconic methods available in the literature giving significantly different results. Applications may not handle CRSs using a prime meridian other than by default Greenwich. Should this be the case, this failure should be documented. However this name is ambiguous. Should that be the case the test may be run using a value of 0.
Several transformation methods commonly used in the energy industry are tested using realistic coordinate values (see Appendix E). The general process is to load the input file; transform coordinates from geogCRS 1 to geogCRS 2; transform the coordinates referenced to geogCRS 2 back to geogCRS 1; and compare the results with the values in the output file data array.
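Where the transformation engine can be scripted, the general process above can be exercised as in the sketch below; it uses a pyproj/PROJ pipeline for a Position Vector transformation, and the parameter values, ellipsoids and test point are placeholders rather than GIGS Test Data.

    from pyproj import Transformer

    # Illustrative 7-parameter Position Vector transformation expressed as a
    # PROJ pipeline (all parameter values are placeholders).
    PIPELINE = (
        "+proj=pipeline "
        "+step +proj=axisswap +order=2,1 "               # lat,lon -> lon,lat
        "+step +proj=unitconvert +xy_in=deg +xy_out=rad "
        "+step +proj=cart +ellps=intl "                  # geographic -> geocentric
        "+step +proj=helmert +x=84.87 +y=96.49 +z=116.95 "
        "+rx=0 +ry=0 +rz=0.554 +s=0.219 +convention=position_vector "
        "+step +inv +proj=cart +ellps=WGS84 "            # geocentric -> geographic
        "+step +proj=unitconvert +xy_in=rad +xy_out=deg "
        "+step +proj=axisswap +order=2,1"
    )

    transformer = Transformer.from_pipeline(PIPELINE)
    lat, lon = 52.0, 3.0                                  # hypothetical input point
    lat2, lon2 = transformer.transform(lat, lon)          # geogCRS 1 -> geogCRS 2
    lat1, lon1 = transformer.transform(lat2, lon2, direction="INVERSE")  # and back
    print(lat2, lon2, lat1, lon1)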
All points in the input file should be tested in the direction dictated by the input fields. Round trip calculations from the transformed coordinates back to the original should also be tested for the point or points indicated in the input file, with the final coordinates compared with the starting values. The CRSs to which input and output coordinates are referenced are given in the file headers (see the relevant Series data for associated parameters). In some cases there may be multiple CRSs (usually 2D and 3D versions) to be tested for one Test Procedure, or subtly different algorithms.
In this case the files are split into multiple parts. The necessary transformations are given in the relevant Series Test Procedures (horizontal and vertical). The Test Procedures in the series are designed for the transformation of individual points. If the software does not have the functionality to allow this, it may be necessary to first create a project for each Test Procedure and to load the test points as if they were the locations of geoscience data.
Furthermore, the Test Datasets are designed to test methods individually. This configuration does not test software behaviour for the selection of coordinate transformation method when several methods are available, for example in Australia where low, medium and high accuracy variants are promoted using 3-parameter geocentric translation, 7-parameter coordinate frame and NTv2 methods respectively.
Software might not allow a User to override the use of a higher accuracy method with a coordinate transformation using a lower accuracy method. This should be investigated without a specific Test Data set. Should this be the case, then this test cannot be conducted. The reason for failure should be stated. Output coordinates differ due to the different CRSs and transformations used in these tests. Test Procedure: - Position Vector geographic domain transformations.
Either the Position Vector transformation (geog2D domain) method or the Position Vector transformation (geog3D domain) method is acceptable. Horizontal coordinates obtained for those points with ellipsoidal heights significantly different from zero will be incorrect, whereas correct results may be generated for points with zero or near-zero ellipsoidal heights.
For large ellipsoidal heights (either positive or negative), the correct results are given by the geog3D EPSG method. Results that are a match should be clearly documented in the report on the test results. The Test Data includes coordinate transformations of interest to the industry using different units of measure.
Output coordinates differ due to the different CRSs and coordinate transformations in these tests. Applications may use only one of these conventions within their coordinate transformation engines, but it is expected that they will accept coordinate transformation definitions for both methods and make the necessary adjustments internally.
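Assuming the conventions referred to here are the Position Vector and Coordinate Frame rotation conventions, the internal adjustment amounts to reversing the sign of the three rotation parameters; a minimal sketch follows (parameter naming and the example values are illustrative).

    def position_vector_to_coordinate_frame(params):
        """Re-express a 7-parameter Helmert definition given in the Position
        Vector rotation convention in the Coordinate Frame rotation
        convention by negating the three rotations; translations and scale
        are unchanged."""
        converted = dict(params)
        for key in ("rx", "ry", "rz"):
            converted[key] = -params[key]
        return converted

    pv = {"tx": 84.87, "ty": 96.49, "tz": 116.95,
          "rx": 0.0, "ry": 0.0, "rz": 0.554, "ds": 0.219}
    print(position_vector_to_coordinate_frame(pv))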
Should the application fail to make these adjustments, the output will be incorrect. Test Procedure: Molodensky-Badekas geographic 2D domain transformations. Either the Molodensky-Badekas (geog2D domain) method or the Molodensky-Badekas (geog3D domain) method is acceptable. It is assumed that these files are included in the application being tested. If necessary, the gridded data files may be downloaded from NGS.
The Test Data is split into parts; if the application requires early binding, use parts 2 and 3 only; part 1 is for all other applications. For applications incorporating both methods, the results in the area of overlap should be checked.
Some of the test points are in the overlap areas and are common with those in the related Test Procedure (see Figure 5 above). Any similarity between the grid-based transformations required here and those defined in that Test Procedure is coincidental and not related.
The file to be used is determined by the latitude and longitude of each test point. If the application being tested does not have these grids bundled within it, they will need to be obtained from the National Geodetic Survey.
This should be handled internally within the application, as the EPSG and ISO convention of longitudes positive east should be presented to users.
For applications incorporating both methods the results in the area of overlap should be checked. Some of the test points are in the overlap areas and are common with those in the related Test Procedure (see Figure 1 above). Results should be compared. However, the NTv2 transformation method is actively used in many countries outside of Canada, where it originated.
The problem points appear to fall on certain parallels and meridians. Points to test these cases are included in the Test Data, and the included tests should suffice, since failure at the selected meridians would be indicative of the wider problem.