A simple protocol I often use is a header/data system like:
#include <stdint.h>

// Header
typedef struct {
    uint8_t  flag;       // fixed byte marking the start of a packet
    uint8_t  version;    // protocol version number or type of packet
    uint32_t bytes;      // number of bytes in the data section
    uint32_t headerCRC;  // CRC of the preceding header fields
} PacketHeader;          // pack this if reading it straight off the wire

// Data section (variable length, so shown as a layout rather than a struct)
uint8_t  data[];  // 'bytes' bytes of payload
uint32_t dataCRC; // CRC of the payload
A packet is a header followed by the data.
This lets me scan for the flag byte and then check for a valid header using the header CRC; if the header checks out I parse the data, and if not I keep scanning for the flag until I do find a valid header.
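For illustration, a minimal resync loop might look like the sketch below. The crc32() and read_byte() routines are placeholders (assumed, not part of the original), and little-endian byte order is assumed for the multi-byte fields:

#include <stdint.h>
#include <string.h>

#define FLAG 0x7E  // arbitrary sync byte chosen for this sketch

// Placeholders: a CRC-32 over a buffer and a blocking byte read.
extern uint32_t crc32(const uint8_t *buf, uint32_t len);
extern uint8_t  read_byte(void);

// Slide a 10-byte window (flag + version + bytes + headerCRC) over the
// incoming stream until the window holds a header whose CRC checks out.
int find_header(uint8_t *version, uint32_t *bytes)
{
    uint8_t hdr[10];
    for (int i = 0; i < 10; i++)  // prime the window
        hdr[i] = read_byte();
    for (;;) {
        uint32_t crc;
        memcpy(&crc, &hdr[6], 4);  // stored CRC (little-endian assumed)
        if (hdr[0] == FLAG && crc == crc32(hdr, 6)) {
            *version = hdr[1];
            memcpy(bytes, &hdr[2], 4);
            return 0;  // valid header found; payload follows
        }
        memmove(hdr, hdr + 1, 9);  // not valid: slide one byte and retry
        hdr[9] = read_byte();
    }
}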
Depending on what you need, you can adjust the width of each field. I usually leave the CRC as the last field in both the header and the data so that the CRC can be calculated on the fly as the data is sent (while streaming).
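This is why the field order matters: the sender can fold each byte into the CRC as it goes out and transmit the CRC last, with no second pass over the buffer. A minimal sketch, assuming hypothetical crc32_update() and send_byte() routines:

#include <stdint.h>

// Placeholders: a byte-wise CRC-32 update and a UART/link send routine.
extern uint32_t crc32_update(uint32_t crc, uint8_t byte);
extern void     send_byte(uint8_t byte);

// Stream the payload and compute its CRC in the same pass.
void send_data(const uint8_t *data, uint32_t bytes)
{
    uint32_t crc = 0xFFFFFFFF;  // typical CRC-32 seed
    for (uint32_t i = 0; i < bytes; i++) {
        send_byte(data[i]);
        crc = crc32_update(crc, data[i]);
    }
    crc ^= 0xFFFFFFFF;  // typical CRC-32 final XOR
    for (int i = 0; i < 4; i++)
        send_byte((uint8_t)(crc >> (8 * i)));  // CRC last, little-endian
}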
The version number is important. Too often people change a protocol (swap the CRC for a checksum, or change field sizes) with no way to tell old packets from new; with a version number you can support multiple versions side by side, so always include one. Sometimes, if I am tight on space, I will skip the flag byte and just check every byte offset for a valid header. It all depends on what you need.
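In code, the version byte becomes a dispatch point, so old and new formats can coexist on the same link. A sketch, with parse_v1()/parse_v2() standing in for whatever each revision needs:

#include <stdint.h>

// Hypothetical parsers for two protocol revisions; v2 might use a
// different CRC or field widths, but the dispatch stays the same.
extern int parse_v1(const uint8_t *data, uint32_t bytes);
extern int parse_v2(const uint8_t *data, uint32_t bytes);

int parse_packet(uint8_t version, const uint8_t *data, uint32_t bytes)
{
    switch (version) {
    case 1:  return parse_v1(data, bytes);
    case 2:  return parse_v2(data, bytes);
    default: return -1;  // unknown version: drop the packet and resync
    }
}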
I have seen people try to optimize packet sizes; for example, with the layout above, if you only need to send 4 bytes of data the overhead is huge, relatively speaking. However, in my experience, every time someone has "optimized" their protocol they end up regretting it, especially if they did not include a version number.
With TCP/IP, however, the transport already delivers the data in order and error-checked, so even sending ASCII strings with a NUL character as the separator works just fine. For TCP/IP I often stick with ASCII and JSON, as they are easy to parse and debug.
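Since TCP gives you a byte stream rather than message boundaries, the only framing needed is splitting on the separator. A minimal sketch of that, with handle() as a hypothetical per-message callback:

#include <stddef.h>
#include <string.h>

// Split a receive buffer into NUL-separated messages. Leftover bytes
// (a partial message) move to the front; the new fill level is returned
// so the caller can append the next read() after it.
size_t split_messages(char *buf, size_t len, void (*handle)(const char *msg))
{
    size_t start = 0;
    for (size_t i = 0; i < len; i++) {
        if (buf[i] == '\0') {
            handle(&buf[start]);  // complete, already NUL-terminated
            start = i + 1;
        }
    }
    memmove(buf, buf + start, len - start);  // keep the partial tail
    return len - start;
}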