mirror of https://github.com/FirebirdSQL/firebird-qa.git synced 2025-02-02 02:40:42 +01:00

Added/Updated functional\replication\test_failed_DDL_commands_can_be_replicated.py. Checked on 5.0.0.623, 4.0.1.2692 - both CS and SS, on both Windows and Linux. A 'PytestAssertRewriteWarning' warning is raised on Linux.

zotov 2022-08-24 00:45:46 +03:00
parent de3747c730
commit a21f54edf0


@@ -17,6 +1,7 @@ DESCRIPTION:
During this delay, we check the replication log every second, searching it for a line with the number of the last generated
segment (which was replicated and finally deleted).
We can assume that replication finished OK only when such a line is found (see 'POINT-1').
Then we check that both databases have the same data in the user table 'test'.
Further, we invoke ISQL to execute an auxiliary script that drops all DB objects on the master (with the '-nod' command switch).
After all objects have been dropped, we have to wait again until the replica catches up with the master (see 'POINT-2').
@@ -25,413 +26,293 @@ DESCRIPTION:
The only difference in metadata must be the 'CREATE DATABASE' statement with different DB names - we suppress it,
so no metadata difference should be reported.
####################
### CRUCIAL NOTE ###
####################
Currently (25.06.2021) there is a bug in FB 4.x and 5.x which can be seen on the SECOND run of this test: the message
"ERROR: Record format with length 68 is not found for table TEST" will appear after inserting the 1st record on the master.
The reason is "dirty" pages that remain in RDB$RELATION_FIELDS on both master and replica after the table is dropped.
The following query shows the different data that appear in the replica DB on the 1st and 2nd runs (just after the table was created on master):
=======
set blobdisplay 6;
select rdb$descriptor as fmt_descr
from rdb$formats natural join rdb$relations where rdb$relation_name = 'TEST';
=======
This bug was explained by dimitr, see his letters of 25.06.2021 11:49 and 25.06.2021 16:56.
It will be fixed later.
The only workaround is to run SWEEP after all DB objects have been dropped.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
!NB! BOTH master and replica must be cleaned up by sweep!
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
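For illustration, such a sweep can be done with gfix against both databases (a minimal sketch; the database
names below are only examples following the naming scheme described in the NOTE section):
=======
gfix -sweep localhost:fbt-main.fb50.SS.fdb
gfix -sweep localhost:fbt-repl.fb50.SS.fdb
=======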
################
### N O T E ###
################
Test assumes that master and replica DB have been created beforehand.
Also, it assumes that %FB_HOME%/replication.conf has been prepared with appropriate parameters for replication.
In particular, the names of directories and databases must include the FB major version and ServerMode being checked.
* verbose = true // in order to find the line with the message that the required segment was replicated
* section for master database with specified parameters:
journal_directory = "!fbt_repo!/tmp/fb-replication.!fb_major!.!server_mode!.journal"
journal_archive_directory = "!fbt_repo!/tmp/fb-replication.!fb_major!.!server_mode!.archive"
journal_archive_command = "copy $(pathname) $(archivepathname)"
journal_archive_timeout = 10
* section for replica database with specified parameter:
journal_source_directory = "!fbt_repo!/tmp/fb-replication.!fb_major!.!server_mode!.archive"
Master and replica databases must be created in the "!fbt_repo!/tmp" directory and have names like these:
'fbt-main.fb40.SS.fdb'; 'fbt-repl.fb40.SS.fdb'; - for FB 4.x ('SS' = Super; 'CS' = Classic)
'fbt-main.fb50.SS.fdb'; 'fbt-repl.fb50.SS.fdb'; - for FB 5.x ('SS' = Super; 'CS' = Classic)
NB: the fixed numeric value ('40' or '50') must be used for any minor FB version (4.0; 4.0.1; 4.1; 5.0; 5.1 etc)
These two databases must NOT be dropped in any test related to replication!
They are created and dropped in the batch scenario which prepares the FB instance to be checked for each ServerMode
and performs cleanup afterwards, i.e. when all tests have been completed.
NB. Currently this task is implemented only as a Windows batch, thus the test has the attribute platform = 'Windows'.
Temporary comment. For debug purposes:
1) find out the SUFFIX of the name of the FB service which is to be tested (e.g. 'DefaultInstance', '40SS' etc);
2) copy the file %fbt-repo%/tests/functional/tabloid/batches/setup-fb-for-replication.bat.txt
to some place and rename it "*.bat";
3) open this .bat in an editor and adjust the value of the 'fbt_repo' variable;
4) run: setup-fb-for-replication.bat [SUFFIX_OF_FB_SERVICE]
where SUFFIX_OF_FB_SERVICE is the ending part of the name of the FB service which you want to check:
DefaultInstance ; 40ss ; 40cs ; 50ss ; 50cs etc
5) batch 'setup-fb-for-replication.bat' will:
* stop selected FB instance
* create test databases (in !fbt_repo!/tmp);
* prepare journal/archive sub-folders for replication (also in !fbt_repo!/tmp);
* replace %fb_home%/replication.conf with the appropriate one
* start selected FB instance
6) run this test (FB instance will be already launched by setup-fb-for-replication.bat):
%fbt_repo%/fbt-run2.bat dblevel-triggers-must-not-fire-on-replica.fbt 50ss, etc
Confirmed bug on 4.0.0.126, 4.0.1.2556: the message "Cannot make field Y of table TEST NOT NULL because there are NULLs"
is added to the replication log, after which replication gets stuck.
Checked on: 5.0.0.131 SS/CS; 4.0.1.2563 SS/CS.
FBTEST: functional.replication.failed_DDL_commands_can_be_replicated
NOTES:
[25.06.2021] pzotov
1. As of 25-JUN-2021, there was a problem in FB 4.x and 5.x which can be seen on the SECOND run of this test: the message
"ERROR: Record format with length 68 is not found for table TEST" will appear after inserting the 1st record on the master.
The reason is "dirty" pages that remain in RDB$RELATION_FIELDS on both master and replica after the table is dropped.
The following query shows the different data that appear in the replica DB on the 1st and 2nd runs (just after the table was created on master):
=======
set blobdisplay 6;
select rdb$descriptor as fmt_descr
from rdb$formats natural join rdb$relations where rdb$relation_name = 'TEST';
=======
This bug was explained by dimitr, see his letters of 25.06.2021 11:49 and 25.06.2021 16:56.
It will be fixed later.
The only workaround is to run SWEEP after all DB objects have been dropped.
BOTH master and replica must be cleaned up by sweep!
2. Test assumes that master and replica DB have been created beforehand.
These two databases must be neither created nor dropped in any test related to replication.
They are created and dropped in the batch scenario which prepares the FB instance to be checked for each ServerMode
and performs cleanup afterwards, i.e. when all tests have been completed.
3. Test assumes that $FB_HOME/replication.conf has been prepared beforehand with appropriate parameters for replication.
In particular, the parameter verbose_logging must be 'true'. Otherwise we cannot get a reliable outcome about whether
segments were actually replicated or not (at least currently - as of Aug 2022 - there is no other way to get such info).
[23.08.2022] pzotov
4. A warning is raised on Linux:
../../../usr/local/lib/python3.9/site-packages/_pytest/config/__init__.py:1126
/usr/local/lib/python3.9/site-packages/_pytest/config/__init__.py:1126:
PytestAssertRewriteWarning: Module already imported so cannot be rewritten: __editable___firebird_qa_0_17_0_finder
self._mark_plugins_for_rewrite(hook)
The reason is currently unknown.
Checked on 5.0.0.623, 4.0.1.2692 - both CS and SS, on both Windows and Linux.
"""
import re
from difflib import unified_diff
from pathlib import Path
import time
import pytest
from firebird.qa import *
from firebird.driver import connect, create_database, DbWriteMode, ReplicaMode
# QA_GLOBALS -- dict defined in qa/plugin.py; it provides settings
# read from act.files_dir/'test_config.ini':
repl_settings = QA_GLOBALS['replication']
MAX_TIME_FOR_WAIT_SEGMENT_IN_LOG = int(repl_settings['max_time_for_wait_segment_in_log'])
MAIN_DB_ALIAS = repl_settings['main_db_alias']
REPL_DB_ALIAS = repl_settings['repl_db_alias']
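# A minimal sketch of the expected [replication] section in that file
# (the values shown are illustrative assumptions, not the actual settings):
#
#   [replication]
#   max_time_for_wait_segment_in_log = 65
#   main_db_alias = ...
#   repl_db_alias = ...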
db_main = db_factory( filename = '#' + MAIN_DB_ALIAS, do_not_create = True, do_not_drop = True)
db_repl = db_factory( filename = '#' + REPL_DB_ALIAS, do_not_create = True, do_not_drop = True)
substitutions = [('Start removing objects in:.*', 'Start removing objects'),
('Finish. Total objects removed: [1-9]\\d*', 'Finish. Total objects removed'),
('.* CREATE DATABASE .*', ''),
('FOUND message about replicated segment N .*', 'FOUND message about replicated segment')]
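# For illustration: each substitution above is a (regex, replacement) pair which the QA
# framework applies to both actual and expected output before comparison, e.g.:
#   re.sub('FOUND message about replicated segment N .*',
#          'FOUND message about replicated segment',
#          'POINT-1 FOUND message about replicated segment N 1.')
#   -> 'POINT-1 FOUND message about replicated segment'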
act_db_main = python_act('db_main', substitutions=substitutions)
act_db_repl = python_act('db_repl', substitutions=substitutions)
#--------------------------------------------
def wait_for_data_in_replica( act_db_main: Action, max_allowed_time_for_wait, prefix_msg = '' ):
expected_stdout = """
POINT-1 FOUND message about replicated segment N 1.
replication_log = act_db_main.home_dir / 'replication.log'
MAIN_ID -3
MAIN_X 3333
MAIN_ID -2
MAIN_X <null>
MAIN_ID -1
MAIN_X 1111
replold_lines = []
with open(replication_log, 'r') as f:
replold_lines = f.readlines()
REPL_ID -3
REPL_X 3333
REPL_ID -2
REPL_X <null>
REPL_ID -1
REPL_X 1111
with act_db_main.db.connect(no_db_triggers = True) as con:
with con.cursor() as cur:
cur.execute("select rdb$get_context('SYSTEM','REPLICATION_SEQUENCE') from rdb$database")
last_generated_repl_segment = cur.fetchone()[0]
Start removing objects
Finish. Total objects removed
POINT-2 FOUND message about replicated segment N 2.
"""
# VERBOSE: Segment 1 (2582 bytes) is replicated in 1 second(s), deleting the file
# VERBOSE: Segment 2 (200 bytes) is replicated in 82 ms, deleting the file
p_successfully_replicated = re.compile( f'\\+\\s+verbose:\\s+segment\\s+{last_generated_repl_segment}\\s+\\(\\d+\\s+bytes\\)\\s+is\\s+replicated.*deleting', re.IGNORECASE)
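# NB: difflib.unified_diff marks lines that exist only in the newer snapshot with a leading '+',
# which is why the pattern above anchors on '\+\s+': we want only log lines that appeared
# after the ISQL run, not lines already present in the earlier snapshot.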
# VERBOSE: Segment 16 replication failure at offset 33628
p_replication_failure = re.compile('segment\\s+\\d+\\s+replication\\s+failure', re.IGNORECASE)
found_required_message = False
found_replfail_message = False
found_common_error_msg = False
for i in range(0,max_allowed_time_for_wait):
time.sleep(1)
# Get content of fb_home replication.log _after_ isql finish:
with open(replication_log, 'r') as f:
diff_data = unified_diff(
replold_lines,
f.readlines()
)
for k,d in enumerate(diff_data):
if p_successfully_replicated.search(d):
print( (prefix_msg + ' ' if prefix_msg else '') + f'FOUND message about replicated segment N {last_generated_repl_segment}.' )
found_required_message = True
break
if p_replication_failure.search(d):
print( (prefix_msg + ' ' if prefix_msg else '') + '@@@ SEGMENT REPLICATION FAILURE @@@ ' + d )
found_replfail_message = True
break
if 'ERROR:' in d:
print( (prefix_msg + ' ' if prefix_msg else '') + '@@@ REPLICATION ERROR @@@ ' + d )
found_common_error_msg = True
break
if found_required_message or found_replfail_message or found_common_error_msg:
break
if not found_required_message:
    print(f'UNEXPECTED RESULT: no message about replicating segment N {last_generated_repl_segment} for {max_allowed_time_for_wait} seconds.')
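# NB: the function only prints its findings; callers capture that output via capsys
# and compare it against expected_stdout, e.g. (as done below):
#     wait_for_data_in_replica(act_db_main, MAX_TIME_FOR_WAIT_SEGMENT_IN_LOG, 'POINT-1')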
#--------------------------------------------
@pytest.mark.version('>=4.0.1')
@pytest.mark.platform('Windows')
def test_1(act_db_main: Action, act_db_repl: Action, capsys):
# test_script_1
#---
#
# import os
# import subprocess
# import re
# import difflib
# import shutil
# import time
#
# os.environ["ISC_USER"] = user_name
# os.environ["ISC_PASSWORD"] = user_password
#
# #####################################
# MAX_TIME_FOR_WAIT_SEGMENT_IN_LOG = 65
# #####################################
#
# svc = fdb.services.connect(host='localhost', user=user_name, password=user_password)
# FB_HOME = svc.get_home_directory()
# svc.close()
#
# engine = db_conn.engine_version # 4.0; 4.1; 5.0 etc -- type float
# fb_major = 'fb' + str(engine)[:1] + '0' # 'fb40'; 'fb50'
#
# cur = db_conn.cursor()
# cur.execute("select rdb$config_value from rdb$config where upper(rdb$config_name) = upper('ServerMode')")
# server_mode = 'XX'
# for r in cur:
# if r[0] == 'Super':
# server_mode = 'SS'
# elif r[0] == 'SuperClassic':
# server_mode = 'SC'
# elif r[0] == 'Classic':
# server_mode = 'CS'
# cur.close()
#
# # 'fbt-main.fb50.ss.fdb' etc:
# db_main = os.path.join( context['temp_directory'], 'fbt-main.' + fb_major + '.' + server_mode + '.fdb' )
# db_repl = db_main.replace( 'fbt-main.', 'fbt-repl.')
#
# # Folders for journalling and archiving segments.
# repl_journal_dir = os.path.join( context['temp_directory'], 'fb-replication.' + fb_major + '.' + server_mode + '.journal' )
# repl_archive_dir = os.path.join( context['temp_directory'], 'fb-replication.' + fb_major + '.' + server_mode + '.archive' )
#
# db_conn.close()
#
# #--------------------------------------------
#
# def flush_and_close( file_handle ):
# # https://docs.python.org/2/library/os.html#os.fsync
# # If you're starting with a Python file object f,
# # first do f.flush(), and
# # then do os.fsync(f.fileno()), to ensure that all internal buffers associated with f are written to disk.
# global os
#
# file_handle.flush()
# if file_handle.mode not in ('r', 'rb') and file_handle.name != os.devnull:
# # otherwise: "OSError: [Errno 9] Bad file descriptor"!
# os.fsync(file_handle.fileno())
# file_handle.close()
#
# #--------------------------------------------
#
# def cleanup( f_names_list ):
# global os
# for i in range(len( f_names_list )):
# if type(f_names_list[i]) == file:
# del_name = f_names_list[i].name
# elif type(f_names_list[i]) == str:
# del_name = f_names_list[i]
# else:
# print('Unrecognized type of element:', f_names_list[i], ' - can not be treated as file.')
# del_name = None
#
# if del_name and os.path.isfile( del_name ):
# os.remove( del_name )
#
# #--------------------------------------------
#
# def wait_for_data_in_replica( fb_home, max_allowed_time_for_wait, db_main, prefix_msg = '' ):
#
# global re
# global difflib
# global time
#
# replold_lines = []
# with open( os.path.join(fb_home,'replication.log'), 'r') as f:
# replold_lines = f.readlines()
#
# con = fdb.connect( dsn = 'localhost:' + db_main, no_db_triggers = 1)
# cur = con.cursor()
# cur.execute("select rdb$get_context('SYSTEM','REPLICATION_SEQUENCE') from rdb$database")
# for r in cur:
# last_generated_repl_segment = r[0]
# cur.close()
# con.close()
#
# #print('last_generated_repl_segment:', last_generated_repl_segment)
#
# # VERBOSE: Segment 1 (2582 bytes) is replicated in 1 second(s), deleting the file
# p_successfully_replicated = re.compile( '\\+\\s+verbose:\\s+segment\\s+%(last_generated_repl_segment)s\\s+\\(\\d+\\s+bytes\\)\\s+is\\s+replicated.*deleting' % locals(), re.IGNORECASE)
#
# # VERBOSE: Segment 16 replication failure at offset 33628
# p_replication_failure = re.compile('segment\\s+\\d+\\s+replication\\s+failure', re.IGNORECASE)
#
# found_required_message = False
# found_replfail_message = False
# found_common_error_msg = False
#
# for i in range(0,max_allowed_time_for_wait):
#
# time.sleep(1)
#
# # Get content of fb_home replication.log _after_ isql finish:
# f_repllog_new = open( os.path.join(fb_home,'replication.log'), 'r')
# diff_data = difflib.unified_diff(
# replold_lines,
# f_repllog_new.readlines()
# )
# f_repllog_new.close()
#
# '''
# print('i=%d' % i)
# print('check replold_lines:')
# print('- - - - - - - - - - - -')
# for k in replold_lines:
# print(' ', k)
# print('- - - - - - - - - - - -')
#
# print('check diff_data:')
# print('- - - - - - - - - - - -')
# for k in diff_data:
# print(' ', k)
# print('- - - - - - - - - - - -')
# '''
#
# for k,d in enumerate(diff_data):
# if p_successfully_replicated.search(d):
# print( (prefix_msg + ' ' if prefix_msg else '') + 'FOUND message about replicated segment N %(last_generated_repl_segment)s.' % locals() )
# found_required_message = True
# break
#
# if p_replication_failure.search(d):
# print( (prefix_msg + ' ' if prefix_msg else '') + '@@@ SEGMENT REPLICATION FAILURE @@@ ' + d )
# found_replfail_message = True
# break
#
# if 'ERROR:' in d:
# print( (prefix_msg + ' ' if prefix_msg else '') + '@@@ REPLICATION ERROR @@@ ' + d )
# found_common_error_msg = True
# break
#
# if found_required_message or found_replfail_message or found_common_error_msg:
# break
#
# if not found_required_message:
# print('UNEXPECTED RESULT: no message about replicating segment N %(last_generated_repl_segment)s for %(max_allowed_time_for_wait)s seconds.' % locals())
#
# #--------------------------------------------
#
# sql_ddl = ''' -- do NOT use in this test -- set bail on;
# set list on;
#
# recreate table test(id bigint primary key, x int);
# insert into test(id, x) values(9223372036854775807, 1111);
# insert into test(id, x) values(9223372036854775806, null);
# insert into test(id, x) values(9223372036854775805, 3333);
# commit;
#
# -- must fail:
# alter table test add y int not null;
# commit;
#
# -- must fail:
# alter table test alter column x set not null;
# commit;
#
#
# create domain dm_nn int;
#
# recreate table test(id smallint primary key, x dm_nn);
# insert into test(id, x) values(-1, 1111);
# insert into test(id, x) values(-2, null);
# insert into test(id, x) values(-3, 3333);
# commit;
#
# -- must fail:
# alter domain dm_nn set not null;
# commit;
#
# -- connect 'localhost:%(db_main)s';
#
# select rdb$get_context('SYSTEM','REPLICATION_SEQUENCE') as last_generated_segment from rdb$database;
# commit;
# ''' % locals()
#
#
# f_sql_chk = open( os.path.join(context['temp_directory'],'tmp_gh_6907_test.sql'), 'w')
# f_sql_chk.write(sql_ddl)
# flush_and_close( f_sql_chk )
#
# f_sql_log = open( ''.join( (os.path.splitext(f_sql_chk.name)[0], '.log' ) ), 'w')
# f_sql_err = open( ''.join( (os.path.splitext(f_sql_chk.name)[0], '.err' ) ), 'w')
# subprocess.call( [ context['isql_path'], 'localhost:' + db_main, '-i', f_sql_chk.name ], stdout = f_sql_log, stderr = subprocess.STDOUT)
# flush_and_close( f_sql_log )
# flush_and_close( f_sql_err )
#
# last_generated_repl_segment = 0
#
# # do NOT check STDERR of initial SQL: it must contain errors
# # because we try to run DDL statement that for sure will FAIL:
# #with open(f_sql_err.name,'r') as f:
# # for line in f:
# # print('UNEXPECTED STDERR in initial SQL: ' + line)
# # MAX_TIME_FOR_WAIT_SEGMENT_IN_LOG = 0
#
# if MAX_TIME_FOR_WAIT_SEGMENT_IN_LOG: # ==> initial SQL script finished w/o errors
#
# ##############################################################################
# ### W A I T U N T I L R E P L I C A B E C O M E S A C T U A L ###
# ##############################################################################
# wait_for_data_in_replica( FB_HOME, MAX_TIME_FOR_WAIT_SEGMENT_IN_LOG, db_main, 'POINT-1' )
#
# runProgram('isql', ['localhost:' + db_main, '-nod'], 'set list on; select t.id as main_id, t.x as main_x from rdb$database r left join test t on 1=1 order by t.id;')
# runProgram('isql', ['localhost:' + db_repl, '-nod'], 'set list on; select t.id as repl_id, t.x as repl_x from rdb$database r left join test t on 1=1 order by t.id;')
#
# # return initial state of master DB:
# # remove all DB objects (tables, views, ...):
# # --------------------------------------------
# sql_clean_ddl = os.path.join(context['files_location'],'drop-all-db-objects.sql')
#
# f_clean_log=open( os.path.join(context['temp_directory'],'tmp_gh_6907_drop-all-db-objects.log'), 'w')
# f_clean_err=open( ''.join( ( os.path.splitext(f_clean_log.name)[0], '.err') ), 'w')
# subprocess.call( [context['isql_path'], 'localhost:' + db_main, '-q', '-nod', '-i', sql_clean_ddl], stdout=f_clean_log, stderr=f_clean_err )
# flush_and_close(f_clean_log)
# flush_and_close(f_clean_err)
#
# with open(f_clean_err.name,'r') as f:
# for line in f:
# print('UNEXPECTED STDERR in cleanup SQL: ' + line)
# MAX_TIME_FOR_WAIT_SEGMENT_IN_LOG = 0
#
# with open(f_clean_log.name,'r') as f:
# for line in f:
# # show number of dropped objects
# print(line)
#
# if MAX_TIME_FOR_WAIT_SEGMENT_IN_LOG: # ==> cleanup SQL script finished w/o errors
#
# ##############################################################################
# ### W A I T U N T I L R E P L I C A B E C O M E S A C T U A L ###
# ##############################################################################
# wait_for_data_in_replica( FB_HOME, MAX_TIME_FOR_WAIT_SEGMENT_IN_LOG, db_main, 'POINT-2' )
#
# f_main_meta_sql=open( os.path.join(context['temp_directory'],'tmp_gh_6907_db_main_meta.sql'), 'w')
# subprocess.call( [context['isql_path'], 'localhost:' + db_main, '-q', '-nod', '-ch', 'utf8', '-x'], stdout=f_main_meta_sql, stderr=subprocess.STDOUT )
# flush_and_close( f_main_meta_sql )
#
# f_repl_meta_sql=open( os.path.join(context['temp_directory'],'tmp_gh_6907_db_repl_meta.sql'), 'w')
# subprocess.call( [context['isql_path'], 'localhost:' + db_repl, '-q', '-nod', '-ch', 'utf8', '-x'], stdout=f_repl_meta_sql, stderr=subprocess.STDOUT )
# flush_and_close( f_repl_meta_sql )
#
# db_main_meta=open(f_main_meta_sql.name, 'r')
# db_repl_meta=open(f_repl_meta_sql.name, 'r')
#
# diffmeta = ''.join(difflib.unified_diff(
# db_main_meta.readlines(),
# db_repl_meta.readlines()
# ))
# db_main_meta.close()
# db_repl_meta.close()
#
# f_meta_diff=open( os.path.join(context['temp_directory'],'tmp_gh_6907_db_meta_diff.txt'), 'w', buffering = 0)
# f_meta_diff.write(diffmeta)
# flush_and_close( f_meta_diff )
#
# # Following must issue only TWO rows:
# # UNEXPECTED METADATA DIFF.: -/* CREATE DATABASE 'localhost:[db_main]' ... */
# # UNEXPECTED METADATA DIFF.: -/* CREATE DATABASE 'localhost:[db_repl]' ... */
# # Only these lines will be suppressed further (see subst. section):
# with open(f_meta_diff.name, 'r') as f:
# for line in f:
# if line[:1] in ('-', '+') and line[:3] not in ('---','+++'):
# print('UNEXPECTED METADATA DIFF.: ' + line)
#
# ######################
# ### A C H T U N G ###
# ######################
# # MANDATORY, OTHERWISE REPLICATION GETS STUCK ON SECOND RUN OF THIS TEST
# # WITH 'ERROR: Record format with length NN is not found for table TEST':
# runProgram('gfix', ['-sweep', 'localhost:' + db_repl])
# runProgram('gfix', ['-sweep', 'localhost:' + db_main])
# #######################
#
#
# # cleanup:
# ##########
# cleanup( (f_sql_chk, f_sql_log, f_sql_err,f_clean_log,f_clean_err,f_main_meta_sql,f_repl_meta_sql,f_meta_diff) )
#
#---
###################
somewhat_failed = 1
###################
try:
sql_init = f'''
-- do NOT use in this test: -- set bail on;
set list on;
recreate table test(id bigint primary key, x int);
insert into test(id, x) values(9223372036854775807, 1111);
insert into test(id, x) values(9223372036854775806, null);
insert into test(id, x) values(9223372036854775805, 3333);
commit;
-- must fail with SQLSTATE = 22006:
alter table test add y int not null;
commit;
-- must fail with SQLSTATE = 22006:
alter table test alter column x set not null;
commit;
create domain dm_nn int;
recreate table test(id smallint primary key, x dm_nn);
insert into test(id, x) values(-1, 1111);
insert into test(id, x) values(-2, null);
insert into test(id, x) values(-3, 3333);
commit;
-- must fail with SQLSTATE = 22006:
alter domain dm_nn set not null;
commit;
'''
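# NB: each 'must fail' statement above is expected to raise
# "Cannot make field ... NOT NULL because there are NULLs" (see the test description);
# the point of the test is that such failed DDL must be replicated without breaking the replica.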
# do NOT check STDERR of the initial SQL: it must contain errors
# because we deliberately run DDL statements that are expected to FAIL.
# We *expect* errors from ISQL, so we have to use combine_output = True and assign the result to return_code:
return_code = act_db_main.isql(switches=['-q'], input = sql_init, combine_output = True)
act_db_main.expected_stdout = 'POINT-1 FOUND message about replicated segment N 1.'
##############################################################################
### W A I T U N T I L R E P L I C A B E C O M E S A C T U A L ###
##############################################################################
wait_for_data_in_replica( act_db_main, MAX_TIME_FOR_WAIT_SEGMENT_IN_LOG, 'POINT-1' )
act_db_main.stdout = capsys.readouterr().out
assert act_db_main.clean_stdout == act_db_main.clean_expected_stdout
act_db_main.reset()
#---------------------------------------------------
sql_chk = '''
set list on;
set count on;
select rdb$get_context('SYSTEM','REPLICA_MODE') replica_mode, t.id, t.x
from test t
order by t.id;
'''
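# NB: rdb$get_context('SYSTEM','REPLICA_MODE') returns NULL on a regular database and the
# replica mode (e.g. 'READ-ONLY') on a replica, so the same script serves to check both
# databases; see the expected REPLICA_MODE values below.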
for a in (act_db_main,act_db_repl):
db_repl_mode = '<null>' if a == act_db_main else 'READ-ONLY'
a.expected_stdout = f"""
REPLICA_MODE {db_repl_mode}
ID -3
X 3333
REPLICA_MODE {db_repl_mode}
ID -2
X <null>
REPLICA_MODE {db_repl_mode}
ID -1
X 1111
Records affected: 3
"""
a.isql(switches=['-q'], input = sql_chk)
assert a.clean_stdout == a.clean_expected_stdout
a.reset()
#-------------------------------------------------------
# return initial state of master DB:
# remove all DB objects (tables, views, ...):
#
db_main_meta, db_repl_meta = '', ''
for a in (act_db_main,act_db_repl):
if a == act_db_main:
sql_clean = (a.files_dir / 'drop-all-db-objects.sql').read_text()
a.expected_stdout = """
Start removing objects
Finish. Total objects removed
"""
a.isql(switches=['-q', '-nod'], input = sql_clean, combine_output = True)
assert a.clean_stdout == a.clean_expected_stdout
a.reset()
a.expected_stdout = 'POINT-2 FOUND message about replicated segment'
##############################################################################
### W A I T U N T I L R E P L I C A B E C O M E S A C T U A L ###
##############################################################################
wait_for_data_in_replica( a, MAX_TIME_FOR_WAIT_SEGMENT_IN_LOG, 'POINT-2' )
a.stdout = capsys.readouterr().out
assert a.clean_stdout == a.clean_expected_stdout
a.reset()
db_main_meta = a.extract_meta(charset = 'utf8', io_enc = 'utf8')
else:
db_repl_meta = a.extract_meta(charset = 'utf8', io_enc = 'utf8')
######################
### A C H T U N G ###
######################
# MANDATORY, OTHERWISE REPLICATION GETS STUCK ON SECOND RUN OF THIS TEST
# WITH 'ERROR: Record format with length NN is not found for table TEST':
a.gfix(switches=['-sweep', a.db.dsn])
# Final point: metadata must become equal:
#
diff_meta = ''.join(unified_diff(
    [x for x in db_main_meta.splitlines() if 'CREATE DATABASE' not in x],
    [x for x in db_repl_meta.splitlines() if 'CREATE DATABASE' not in x])
)
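# The CREATE DATABASE comment lines were filtered out above, so any remaining diff line
# would indicate real metadata divergence between master and replica: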
assert diff_meta == ''
###################
somewhat_failed = 0
###################
finally:
    # If any previous assert failed, we have to RECREATE both master and slave databases.
    # Otherwise further execution of this test or other replication-related tests will most likely fail.
    if somewhat_failed:
for a in (act_db_main,act_db_repl):
d = a.db.db_path
a.db.drop()
dbx = create_database(str(d), user = a.db.user)
dbx.close()
with a.connect_server() as srv:
srv.database.set_write_mode(database = d, mode=DbWriteMode.ASYNC)
srv.database.set_sweep_interval(database = d, interval = 0)
if a == act_db_repl:
srv.database.set_replica_mode(database = d, mode = ReplicaMode.READ_ONLY)
else:
with a.db.connect() as con:
con.execute_immediate('alter database enable publication')
con.execute_immediate('alter database include all to publication')
con.commit()
assert somewhat_failed == 0