
SQL Holmes: The Case of the Fist-Fighting Log Readers


Every once in a blue moon, you run across a problem that no one has seen before. Sometimes you can’t find anything at all. Sometimes, you can only find unanswered forum questions. Sometimes, you find the worst thing imaginable: a forum post with someone replying to their own question with “Nevermind. Fixed it.” (HOW DID YOU FIX IT YODARULES1971?!?)

We had a similar experience a while back. Allow me to take you through it.

It started with a single, innocuous alert from one of our SQL Server Replication distributors:

DESCRIPTION: Replication-Replication Transaction-Log Reader Subsystem: agent SOMESERVER-SomeDB-6 failed. The process could not execute 'sp_replcmds' on 'SOMESERVER'.

Note: If you don’t have SQL Server Alerts set up on your instances, then you really, really should.
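Configuring replication typically leaves a set of canned replication alerts (disabled by default) in SQL Server Agent; you can enable those, or roll your own. A minimal sketch, usually run on the distributor, with placeholder alert and operator names:

EXEC msdb.dbo.sp_add_alert
    @name = N'Replication agent failure',
    @message_id = 14151,  -- "Replication-%s: agent %s failed. %s"
    @enabled = 1,
    @include_event_description_in = 1  -- 1 = include the error text in email

EXEC msdb.dbo.sp_add_notification
    @alert_name = N'Replication agent failure',
    @operator_name = N'DBA Team',  -- assumes this operator already exists
    @notification_method = 1       -- 1 = email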

We had two databases in this instance that were replicated. One was chugging along just fine, the other was giving the old log reader chestnut:

The process could not execute 'sp_replcmds' on 'SOMESERVER'. (Source: MSSQL_REPL, Error number: MSSQL_REPL20011) Get help: http://help/MSSQL_REPL20011

Experience has taught me that the most likely issues are the following:
1. Somebody changed something (Troubleshooting 101).
2. The owners of the databases involved in replication are wrong (usually not ‘sa’); a quick check for this is sketched below.
3. A database trigger somewhere is trying to make a cross-database change.
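For number 2, both the check and the fix are one-liners (a sketch; SomeDB is a stand-in name):

-- Who owns each database? Replicated databases generally want 'sa'.
SELECT name, SUSER_SNAME(owner_sid) AS owner_name
FROM sys.databases

-- Fix a wrong owner (stand-in database name)
ALTER AUTHORIZATION ON DATABASE::SomeDB TO sa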

“I saw some smoke coming from the barn publication.”  -SQL Replication Monitor

Number 1 had happened: we had deployed schema changes just prior to everything going south. Numbers 2 and 3, however, checked out fine. We could see the error in the replication log; the log reader was choking on a primary key change we had made. There was nothing obviously wrong with the command itself, so we moved on.

And Then We Tried to Fix It

After a few hours of troubleshooting, restarting the log readers, crying for our mommies, yadda yadda, the call was made to just reinitialize the publication. Ye ole’ replication wreck-n-restart.

It didn’t work. The publication would not reinitialize, failing with the same log reader error as before. This makes sense, as the log reader is shared among all the publications on a given published database. It just couldn’t get over the command that couldn’t be parsed.

With few options left, we did the no-no and reset the log reader.

EXEC sp_replrestart

This is an internal command, run on the publisher, that is used when you need to restore a transactionally replicated database. It basically resets the LSN on the distributor to the highest value on the publisher, for our purposes skipping over the LSN/command that was causing the log reader’s issues.


And it worked. Mostly. Replication started to flow again, but we needed to re-initialize the subscribers properly, since we’d just skipped over every transaction that had accumulated between the time the issue occurred and the moment we reset the log reader.
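For completeness, the re-init itself is a single call per publication, run at the publisher in the published database (a sketch with a stand-in publication name; Replication Monitor can do the same through the GUI):

EXEC sp_reinitsubscription
    @publication = N'SomePublication',
    @article = N'all'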

Yes, But What Does It MEAN?!?!

So, we had “resolved” the issue, but we still had no idea what the “issue” really was. After we had righted the ship and re-applied all the indexes that had been dropped at the subscriber, we circled back around to determine root cause, and why no one else seemed to have any idea what could cause this.

We went to our Junior DBA (Google) and got a pretty narrow range of responses, including the ones I listed before. No one seemed to be having the same issue as us, even with similar errors. So, we had to start at the bottom and work to the top.

Combing through the log reader error logs, we found one error that was different from the others:

Cannot find an object ID for the replication system table 'cdc.change_tables'. Verify that the system table exists and is accessible by querying it directly. If it does exist, stop and restart the Log Reader Agent; if it does not exist, drop and reconfigure replication. (Source: MSSQLServer, Error number: 18807)

Well, that’s different. CDC is enabled on this database and used on quite a few tables. We checked, and the table cdc.change_tables existed; we wouldn’t get very far in life without it.
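If you want to run the same sanity check, something like this does it (a minimal sketch; SomeDB stands in for the real database name):

USE SomeDB
GO
-- OBJECT_ID() returns NULL if the CDC system table is missing
SELECT OBJECT_ID('cdc.change_tables')
-- Or query it directly, as the error message suggests
SELECT TOP (10) * FROM cdc.change_tables
GO

The next step was to look at the CDC error log, which returns the last 64 errors encountered: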

SELECT * FROM sys.dm_cdc_errors

And in the sys.dm_cdc_errors DMV, we found this error:

Log scan process failed in processing a ddl log record. Refer to previous errors in the current session to identify the cause and correct any associated problems.

That seems pretty familiar. It was preceded by the following three errors:

Invalid length parameter passed to the RIGHT function.
Log Scan process failed in processing log records. Refer to previous errors in the current session to identify the cause and correct any associated problems.
Log scan process failed in processing a ddl log record. Refer to previous errors in the current session to identify the cause and correct any associated problems.

Different CDC, but I could see why you would be confused.

This points pretty squarely at CDC as the culprit. When a database is both published and CDC-enabled, the two features share one log scanning pipeline: the replication Log Reader Agent harvests commands for the distributor and for the CDC change tables in a single pass. Apparently, if CDC gets hung up, replication panics and just starts punching itself in the face. It’s an interesting design choice.
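You can see some of this wiring yourself. When a CDC-enabled database is also published, the capture work is handed to the Log Reader Agent and the normal CDC capture job goes away; sys.sp_cdc_help_jobs, run in the CDC-enabled database, shows what’s left (a sketch; SomeDB is a stand-in name):

USE SomeDB
GO
-- Lists the CDC capture and cleanup jobs for the current database.
-- On a database that is also published, expect no capture job here:
-- the replication Log Reader Agent does the capturing instead.
EXEC sys.sp_cdc_help_jobs
GO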

Way #938,308,121 To Break Replication

A few weeks later, while deploying more schema changes, the issue resurfaced. Replication dive-bombed, we got the same alert, and everything came to a halt. We simply went in and blew away CDC for the entire database. This may not be an option in your environment, but we needed to nuke it from orbit, just to be sure.

USE SomeDB
GO
EXEC sys.sp_cdc_disable_db
GO

And after executing sys.sp_cdc_disable_db, the issue went away. Success?

We haven’t re-implemented CDC on this database yet, as there are a lot more decisions to be made business-wise, so we don’t know if there’s something internally broken or what. At some point we will need to turn it back on, but when and to what degree (or fallout) isn’t clear.
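When that day comes, turning it back on is the mirror image of the teardown (a sketch with stand-in schema and table names; we haven’t actually run this yet ourselves):

USE SomeDB
GO
EXEC sys.sp_cdc_enable_db
GO
-- Re-enable capture table by table, deliberately this time (stand-in names)
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'SomeTable',
    @role_name = NULL  -- NULL = no gating role on the change data
GO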

What is clear is that something in CDC goofed and took replication down with it. Moral of the story: if replication is having issues, make sure CDC isn’t having issues as well. Also, use CDC sparingly. Don’t just throw it on every table you have “just ’cause.”
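If you’re wondering how liberally CDC has been sprinkled around a database, sys.tables will tell you (a quick sketch):

USE SomeDB
GO
-- Every table currently tracked by CDC in this database
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE t.is_tracked_by_cdc = 1
GO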

-CJ Julius